Product data management (PDM) systems are now widely used to conduct complex multidisciplinary projects in various industrial domains. PDM systems enable teams of designers, engineers, and managers to communicate remotely over a network and to exchange and share common product information. To integrate CAD/CAM/CAE applications with PDM systems and ensure their interoperability, a dedicated family of standards, STEP (ISO 10303), has been developed and is employed. STEP defines the object-oriented language EXPRESS for the formal specification of information schemas, as well as file formats for storing and transferring product data governed by these schemas: the clear-text encoding format SPF and STEP-XML. With the development and widespread adoption of Web technologies, the JSON language is becoming increasingly popular because it is well suited to object-oriented data exchange and storage and has a simple, easy-to-parse syntax. The paper explores the suitability of the JSON language for the unambiguous representation, storage, and interpretation of product data. Under the assumption that the product data can be described by arbitrary information schemas in EXPRESS, formal rules for producing the JSON notation are proposed and presented. Explanatory examples are provided to illustrate the proposed rules. The results of the computational experiments conducted confirm the advantages of the JSON format over SPF and STEP-XML and motivate its widespread adoption when integrating software applications.
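As an illustration of the kind of schema-driven mapping discussed above, the following minimal sketch builds a JSON document for instances of two hypothetical EXPRESS entities using the nlohmann/json C++ library; the specific encoding conventions here (a type field, a flat instance map keyed by SPF-style identifiers, references as "$ref" objects) are assumptions chosen for the example, not the rules proposed in the paper.

```cpp
// Hypothetical EXPRESS schema used for the example:
//   ENTITY point; x : REAL; y : REAL; END_ENTITY;
//   ENTITY line;  p1 : point; p2 : point; END_ENTITY;
#include <iostream>
#include <nlohmann/json.hpp>

int main() {
    nlohmann::json model;
    // each instance carries its entity name and attribute values
    model["#1"] = { {"type", "point"}, {"x", 0.0}, {"y", 0.0} };
    model["#2"] = { {"type", "point"}, {"x", 1.0}, {"y", 2.0} };
    // references to other instances are encoded as {"$ref": "#id"} objects
    model["#3"] = { {"type", "line"},
                    {"p1", {{"$ref", "#1"}}},
                    {"p2", {{"$ref", "#2"}}} };
    std::cout << model.dump(2) << std::endl;  // pretty-printed exchange file
    return 0;
}
```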
System software is a cornerstone of any software system, so building secure system software in accordance with the requirements of certification authorities and state-of-the-art practices is an important scientific and technical problem. One possible approach to this problem is to build a methodology for secure system software development that incorporates advanced scientific technologies and industry best practices. The paper presents the current results achieved in building such a methodology in the following directions. The first is improvement of the regulatory framework, including the development of GOST R specifications defining requirements for formal models of access control policies and their formal verification. The second direction is the design and verification of formal models of the corresponding security functional requirements. The third direction is the application of new and well-established technologies for static and run-time analysis of system software. The considered technologies include static analysis, fuzzing, functional and unit testing, as well as testing the system software against formal models of its functional security requirements. The fourth direction is the development of methods for collecting the results of all kinds of analysis and for their analytical processing. All directions are illustrated by practical examples of applying the methodology to the development of the Astra Linux operating system distribution, which is certified at the highest evaluation assurance levels.
This work is focused on improving the efficiency of an information security system through the development of an ontological model and an approach based on it to information security (IS) risk management. The result is a flexible solution designed to increase the efficiency of the information security system by reducing the time spent on managerial decision-making. The work concludes with a comparative analysis of existing approaches and techniques for information security risk management against the described approach. On the basis of the developed ontology and approach, highly intelligent information security risk management systems, and the information security system itself, can be created.
This paper describes the results of the development of methods for marking text documents represented as raster images. An important feature of the algorithms is the ability to erase the current document mark and embed another one. The study concerns structural marking algorithms based on vertical word shifts and brightness changes in certain areas of the words. Segmentation tools are used to obtain the document layout, BCH codes are used for error correction, a likelihood-maximization method is used for mark extraction, and a neural network is used for recovering perturbed words. Testing has demonstrated the practical applicability of the algorithms under printing and scanning.
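A minimal sketch of the structural embedding step described above is given below; the one-pixel shift magnitude and the helper names are assumptions for illustration, and, as in the abstract, the payload would in practice be protected by a BCH code before embedding and recovered by likelihood maximization.

```cpp
#include <cstdint>
#include <vector>

// Bounding box of a segmented word in the page raster (hypothetical structure).
struct WordBox { int x, y, w, h; };

// Embed one payload bit per word: shift the word one pixel up for a 0 bit
// and one pixel down for a 1 bit (illustrative magnitude).
void embedBits(std::vector<WordBox>& words, const std::vector<uint8_t>& bits) {
    const int shift = 1;  // vertical shift in pixels
    for (std::size_t i = 0; i < words.size() && i < bits.size(); ++i)
        words[i].y += bits[i] ? shift : -shift;
}
```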
The paper discusses the issues of training models for detecting computer attacks with machine learning methods. The results of an analysis of publicly available training datasets and of tools for analyzing network traffic and extracting features of network sessions are presented. The drawbacks of existing tools and possible errors in the datasets formed with their help are noted. It is concluded that one has to collect one's own training data when there are no guarantees of the reliability of public datasets, and that pre-trained models are of limited use in networks whose characteristics differ from those of the network in which the training traffic was collected. A practical approach to generating training data for computer attack detection models is proposed. The proposed solutions have been tested to evaluate the quality of model training on the collected data and the quality of attack detection in a real network infrastructure.
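The sketch below shows the general shape of such training data: a per-session record with a ground-truth label assigned by the data-generation bench and a conversion to a fixed-length feature vector. The particular fields are assumptions for illustration, not the feature set used by the authors.

```cpp
#include <cstdint>
#include <vector>

// One captured network session with a label from the traffic generator (hypothetical fields).
struct Session {
    double   duration;         // seconds
    uint64_t bytesSent;
    uint64_t bytesReceived;
    uint32_t packetsSent;
    uint32_t packetsReceived;
    bool     isAttack;         // ground-truth label known to the test bench
};

// Fixed-length feature vector for model training.
std::vector<double> toFeatures(const Session& s) {
    return { s.duration,
             static_cast<double>(s.bytesSent),
             static_cast<double>(s.bytesReceived),
             static_cast<double>(s.packetsSent),
             static_cast<double>(s.packetsReceived) };
}
```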
Cryptographic protocols are used to establish a secure connection between “honest” agents who communicate strictly in accordance with the rules of the protocol. To make sure that a designed cryptographic protocol is cryptographically strong, various software tools are usually used. An adequate specification of a cryptographic protocol is usually presented as a set of requirements on the sequences of transmitted messages, including the format of such messages. Meeting all these requirements makes the formal specification of a real cryptographic protocol cumbersome, and as a result it is difficult to analyze by formal methods. One such rapidly developing tool for formal verification of cryptographic protocols is ProVerif. A distinctive feature of the ProVerif tool is that it often fails on large protocols, i.e. it can neither prove the security of the protocol nor refute it. In such cases, one resorts either to approximating the problem or to equivalent transformations of the program model in the ProVerif language that simplify the ProVerif model. In this article, we propose a way to simplify ProVerif specifications for AKE protocols that use the ElGamal encryption scheme. Namely, we suggest equivalent transformations that allow us to construct a ProVerif specification that is easier for the ProVerif tool to analyze. Experimental results for the Needham-Schroeder and Yahalom protocols show that such an approach can be promising for automatic verification of real protocols.
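For readers unfamiliar with the cipher referenced above, the toy sketch below illustrates ElGamal encryption and decryption over a small prime group; the tiny parameters are for illustration only and are unrelated to the ProVerif transformations proposed in the paper.

```cpp
#include <cstdint>
#include <iostream>

// Modular exponentiation by repeated squaring (all values stay far below 2^32 here).
uint64_t modpow(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1;
    b %= m;
    while (e) {
        if (e & 1) r = (r * b) % m;
        b = (b * b) % m;
        e >>= 1;
    }
    return r;
}

int main() {
    const uint64_t p = 467, g = 2;        // public group parameters (toy sizes)
    const uint64_t x = 127;               // receiver's private key
    const uint64_t h = modpow(g, x, p);   // receiver's public key h = g^x mod p
    const uint64_t m = 123, k = 213;      // plaintext and sender's ephemeral key
    const uint64_t c1 = modpow(g, k, p);            // first ciphertext component
    const uint64_t c2 = (m * modpow(h, k, p)) % p;  // second ciphertext component
    // Decryption: m = c2 * (c1^x)^(-1) mod p; the inverse is c1^(p-1-x) by Fermat's little theorem.
    const uint64_t dec = (c2 * modpow(c1, p - 1 - x, p)) % p;
    std::cout << dec << std::endl;        // prints 123
    return 0;
}
```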
Use of formal methods in the development of protocol implementations improves the quality of these implementations. The greatest benefit would result from formalizing the primary specifications, which are usually contained in RFC documents. This paper proposes a formal language for primary specifications of cryptographic protocols, which aims at fulfilling (to a higher degree than existing approaches) the conditions required of primary specifications: they have to be concise, declarative, expressive, unambiguous, and executable; in addition, the tools supporting the language have to make it possible to automatically derive high-quality test suites from the specifications. The proposed language is based on a machine (dubbed the C2-machine) specifically designed for the domain of cryptographic protocols. A protocol specification is defined as a program of the C2-machine. This program consists of two parts: the definition of the protocol packets and the definition of the non-deterministic behavior of the protocol parties. One execution of the program simulates one run of the protocol. All the traces that can potentially be produced by such an execution, by definition, comprise all conformant traces of the protocol; in other words, the program of the C2-machine defines the operational contract of the protocol. In the paper, to make the design and operational principles of the C2-machine easier to understand, two abstractions of the C2-machine are presented: the C0-machine and the C1-machine. The C0-machine is used to explain the approach taken in expressing the non-determinism of the protocols. The abstraction level of the C1-machine (which is a refinement of the C0-machine) is sufficient to define the semantics of the basic C2-machine instructions. To enhance the readability of the programs and to reach the level of declarativeness and conciseness of the formalized notations used in some conventional RFCs (e.g. the TLS RFCs), the C2-machine implements some syntactic tricks on top of the basic instructions. To use C2-specifications in black-box testing, a special form of the C2-machine (the C2-machine with an excluded role) is presented. Throughout the paper the proposed concepts are illustrated by examples from the TLS protocol.
Development for stack-based architectures is usually done in legacy low-level languages or in assembly code, so supporting a high-level programming language for such architectures is a real problem. In this paper we describe the development of an LLVM/Clang-based C compiler for the stack-based TF16 processor architecture. LLVM was chosen because its components, such as the disassembler, linker, and debugger, can be adapted to new architectures. Two compiler versions were developed. The first version generated code without using the stack capabilities of TF16, treating it instead as a register-based architecture. This version was relatively easy to develop and provided a comparison point for the second one. In the second version we implemented a platform-independent stack scheduling algorithm that allowed us to generate code that makes use of the stack capabilities of the CPU. When comparing the two versions, the one that utilized the stack capabilities generated code that was on average 35.7% faster and 50.8% smaller than that of the original version. The developed stack scheduling algorithm also makes it possible to support other stack-based architectures in the LLVM toolchain.
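To make the difference between the two code-generation strategies concrete, the sketch below shows the simplest form of targeting a stack machine: a post-order walk of an expression tree that pushes operands and lets operators consume them from the stack. This is a generic illustration only; the stack scheduling algorithm of the paper additionally decides which intermediate values to keep on the hardware stack across statements instead of spilling them to memory.

```cpp
#include <iostream>
#include <memory>
#include <string>

// Expression tree node: leaves hold a literal or variable name, inner nodes an operator.
struct Expr {
    std::string op;                  // "+", "*", ... or a leaf value
    std::unique_ptr<Expr> lhs, rhs;  // null for leaves
};

// Naive stack-machine code generation: post-order traversal.
void emit(const Expr& e) {
    if (!e.lhs) {                    // leaf: just push its value
        std::cout << "PUSH " << e.op << "\n";
        return;
    }
    emit(*e.lhs);
    emit(*e.rhs);
    std::cout << "OP " << e.op << "\n";  // operator pops two operands, pushes the result
}
```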
The sweeping evolution of the Internet of Things (IoT) requires the development of methods and tools for analyzing such devices. A significant share of such devices run operating systems (OS) of the Linux family. Direct application of existing tools for analyzing the software (SW) of this class of devices is not always possible. In the course of research on embedded Linux OSes, the ELF (Embedded Linux Fuzz) tool presented in this work was created. The article deals with the analysis of systems built exclusively on the basis of Linux kernels. The ELF environment is designed for dynamic analysis of devices based on full-system emulation in QEMU. ELF is based on the following principles: testing and analyzing the software of real devices in an environment as close as possible to their native execution environment; integration with existing fuzzing tools; and the ability to conduct distributed analysis.
Modern graphics accelerators (GPUs) can significantly speed up the execution of numerical tasks. However, porting programs to graphics accelerators is not an easy task. Sometimes programs are ported to such accelerators by rewriting them almost completely (for example, when using the OpenCL technology). This raises the daunting task of maintaining two independent source code bases. CUDA graphics accelerators, however, thanks to the technology developed by NVIDIA, allow a single source code to serve both conventional processors (CPUs) and CUDA devices. The machine code generated from this single source depends on which compiler it is compiled with (an ordinary one, such as gcc, icc, or msvc, or the CUDA compiler nvcc). However, in this single source code, one still has to tell the compiler somehow which parts of the code to parallelize on shared memory. For the CPU, this is usually done with OpenMP and special compiler pragmas. For CUDA, parallelization is done in a completely different way. The functional programming library developed by the authors makes it possible to hide the use of one or another shared-memory parallelization mechanism inside the library and to make user source code completely independent of the computing device used (CPU or CUDA). This article shows how this can be done.
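The sketch below shows the underlying single-source mechanism the abstract refers to (it is not the authors' library, whose interface hides this machinery): the same saxpy routine compiles to an OpenMP loop under an ordinary C++ compiler and to a CUDA kernel launch under nvcc, selected via the predefined __CUDACC__ macro.

```cpp
#include <cstddef>

#ifdef __CUDACC__
// CUDA path: one GPU thread per element.
__global__ void saxpyKernel(std::size_t n, float a, const float* x, float* y) {
    std::size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}
#endif

// y = a*x + y, parallelized on whatever device the translation unit targets.
void saxpy(std::size_t n, float a, const float* x, float* y) {
#ifdef __CUDACC__
    // x and y are assumed to point to device or managed memory here.
    saxpyKernel<<<(n + 255) / 256, 256>>>(n, a, x, y);
    cudaDeviceSynchronize();
#else
    #pragma omp parallel for
    for (long long i = 0; i < static_cast<long long>(n); ++i)
        y[i] = a * x[i] + y[i];
#endif
}
```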
In this paper we propose a high-level approach to developing GPU applications based on the Vulkan API. The purpose of the work is to reduce the complexity of developing and debugging applications that implement complex algorithms on the GPU using Vulkan. The proposed approach uses code generation, translating a C++ program into an optimized Vulkan implementation that includes automatic shader generation, resource binding, and the use of synchronization mechanisms (Vulkan barriers). The proposed solution is not a general-purpose programming technology; it specializes in specific tasks. At the same time, it is extensible, which makes it possible to adapt the solution to new problems. For a single input C++ program, we can generate several implementations for different cases (via translator options) or for different hardware. For example, a call to virtual functions can be implemented either through a switch construct in a kernel, or through sorting threads and indirect dispatch via different kernels, or through the so-called callable shaders in Vulkan. Instead of creating a universal programming technology for building various software systems, we offer an extensible technology that can be customized for a specific class of applications. Unlike, for example, Halide, we do not use a domain-specific language; the necessary knowledge is extracted from ordinary C++ code. Therefore, we do not extend C++ with any new language constructs or directives, and the input source code is assumed to be normal C++ code (albeit with some restrictions) that can be compiled by any C++ compiler. We use pattern matching to find specific patterns in C++ code and convert them to efficient GPU code using Vulkan. Patterns are expressed through classes, member functions, and the relationships between them. Thus, the proposed technology makes it possible to ensure a cross-platform solution by generating different implementations of the same algorithm for different GPUs, and at the same time it provides access to the specific hardware functionality required in computer graphics applications. Patterns are divided into architectural and algorithmic. An architectural pattern defines the domain and the behavior of the translator as a whole (for example, image processing, ray tracing, neural networks, computational fluid dynamics, etc.). Algorithmic patterns express knowledge of data and control flow and define a narrower class of algorithms that can be efficiently implemented in hardware; they can occur within architectural patterns. Examples are parallel reduction, compaction (parallel append), sorting, prefix sum, histogram calculation, map-reduce, etc. The proposed generator works on the principle of code morphing. The essence of this approach is that, given a certain class in the program and transformation rules, one can automatically generate another class with the desired properties (for example, a GPU implementation of the algorithm). The generated class inherits from the input class and thus has access to all data and functions of the input class. Overriding virtual functions in the generated class helps the user carefully connect the generated code to other Vulkan code written by hand. Shaders can be generated in two variants: OpenCL shaders for Google's clspv compiler and GLSL shaders for an arbitrary GLSL compiler.
The clspv variant is better for code that uses pointers intensively, while the GLSL generator is better if specific hardware features are used (such as hardware ray tracing acceleration). We have demonstrated our technology on several examples related to image processing and ray tracing, on which we obtain a 30-100x speedup over a multithreaded CPU implementation.
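The sketch below illustrates the kind of ordinary C++ input class that such a code-morphing translator could pattern-match; the class name, the kernel-style member function, and the naming convention are assumptions for the example, not the actual tool's interface. The class runs as-is on the CPU, while a generated class would inherit from it, override the virtual entry point, and back it with generated shaders and Vulkan resource-binding code.

```cpp
#include <cstdint>

// Ordinary C++ class: a candidate for an image-processing "architectural pattern".
class BrightnessPass {
public:
    virtual ~BrightnessPass() = default;

    // Entry point of the whole-image pass; a generated GPU class would override it.
    virtual void Run(uint32_t w, uint32_t h, const float* inColor, uint32_t* outColor) {
        for (uint32_t y = 0; y < h; ++y)
            for (uint32_t x = 0; x < w; ++x)
                kernel_Scale(x, y, w, inColor, outColor);
    }

protected:
    // Per-pixel "kernel" member function; its body is what would be turned into shader code.
    void kernel_Scale(uint32_t x, uint32_t y, uint32_t w,
                      const float* inColor, uint32_t* outColor) {
        float v = inColor[y * w + x] * exposure;
        if (v > 1.0f) v = 1.0f;
        outColor[y * w + x] = static_cast<uint32_t>(v * 255.0f);
    }

    float exposure = 1.5f;  // class data would become shader uniforms / push constants
};
```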
Globalization creates the need for world-class leaders, and very frequently their formation requires teaching them computational tools, mainly to improve their project management capabilities. To give project management greater quality, it is very important to control and follow up all of the users' diverse requirements, from the beginning to the end of any type of project: administrative, academic, engineering, software, video games, virtual and mixed reality, etc. This research presents an approach to a project requirements manager that makes it possible to know the specific requirements in each project phase and gives an advanced description of their various types and their traceability. The prototype promotes project requirement management best practices and links them to the various areas of engineering, administration, and planning, as well as to society and the industrial, commercial, and academic sectors.
On September 19, 2017, an earthquake occurred in Mexico with its epicenter at the border of the states of Puebla and Morelos. It had a magnitude of 7.1 on the Richter scale. 31,090 homes were affected, of which 1,659 suffered total damage. In order to attend to the contingency, the government of Morelos formed an inter-institutional committee in the first hours to carry out an initial diagnosis of the damage and to provide emergency services. This article presents a case study and lessons learned from the software engineering support for the development of a data-driven platform in the various phases of contingency response: census of damaged homes, identification of aid beneficiaries, determination of aid packages according to a damage assessment, logistics and follow-up of aid package delivery, data-driven decision making, and a public portal for open data and budget transparency.
The study of aircraft icing regimes in which the effect of droplet breakup must be taken into account is of great interest for calculating aircraft icing and for optimizing the hydrophobic and anti-icing properties of coatings, and it is relevant in a number of other practical applications. Of great practical importance is the development of high-performance methods for calculating the interaction of aerosol flows with a solid body. This work is devoted to the development of a model of particle dynamics, as well as a model of fragmentation of supercooled droplets of an aerosol flow during its interaction with the surface of a streamlined body. The developed physical and mathematical models can be used in software systems for numerical modeling of aircraft icing.
The paper considers the capabilities of the ICELIB library, developed at ISP RAS, for modeling ice formation processes on aircraft surfaces. As a test case for assessing the accuracy of modeling the physical processes arising during aircraft operation, the surface of a swept wing with a GLC-305 profile was studied. The possibilities of an efficient parallelization algorithm using a liquid film model, a dynamic mesh, and the geometric method of bisectors are discussed. The developed ICELIB library is a collection of three solvers. The first solver, iceFoam1, is intended for preliminary estimation of the icing zones on the fuselage surface and the aircraft's swept wing. The change in the geometric shape of the investigated body is neglected, since the thickness of the ice formed is negligible. This version of the solver has no restrictions on the number of cores when parallelizing. The second solver, iceDyMFoam2, is designed to simulate the formation of two types of ice, smooth ("glaze ice") and loose ("rime ice"), for which the ice shape often takes on a complex and bizarre appearance. The effect of the changing body shape on the icing process is taken into account. The limitations are related to the peculiarities of mesh construction near the boundary layer of the streamlined body. Different algorithms, optimized for their respective cases, are used to move the leading and trailing edges of the film. The performance gain is limited and is achieved with a fixed number of cores. The third solver, iceDyMFoam3, also takes into account the effect of changes in the surface of the solid body during ice formation on the icing process itself. For the case of glaze ice formation, this latest version of the solver is still inferior to the second one for complex ice surface shapes. In the third version, a somewhat simplified and more uniform approach is used to calculate the motion of both boundaries of the ice film. The calculation results are compared with the experimental data of M. Papadakis for various airfoils and a swept wing for the rime ice case. Good agreement with the experimental results was obtained.
The work is devoted to the numerical study of the indoor microclimate. Accurately predicting the distributed microclimate inside a residential space equipped with a "smart house" microclimate control system makes it possible to save thermal energy significantly. Mathematical modeling was carried out by means of the Navier-Stokes equations, the energy equation, and the continuity equation. The system of equations was closed using the k-ω SST turbulence model. The numerical solution was obtained in Code_Saturne, which has a free license. The free Salome software package was used to build the grid. A second-order scheme (SOLU) was used to resolve the velocity field, a multigrid scheme was used for the pressure field, and automatic settings were used for the turbulent kinetic energy, its dissipation, and the temperature field; the maximum number of iterations for each cycle was 10,000, and the solver precision was 10⁻⁸. The SIMPLEC algorithm was used to obtain a coupled solution of the momentum balance and continuity equations. The paper provides an example of verification of the numerical solution, which showed that the relative temperature deviation from the values obtained by other authors was no more than 0.8-1.2%. Numerical simulation of the air velocity field in the residential space gave values from 0.12 to 0.15 m/s. Based on the obtained solution, an analysis of the thermal energy savings achieved by regulating the heat supply was carried out.
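For reference, one standard incompressible eddy-viscosity form of the system named above (continuity, momentum, and energy/temperature equations) is sketched below; the exact formulation used by the authors, for example the buoyancy treatment, may differ:

```latex
\nabla \cdot \mathbf{u} = 0, \qquad
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\,\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nabla \cdot \bigl[(\nu + \nu_t)\,\nabla \mathbf{u}\bigr], \qquad
\frac{\partial T}{\partial t} + (\mathbf{u} \cdot \nabla)\, T
  = \nabla \cdot \bigl[(a + a_t)\,\nabla T\bigr],
```

where ν_t and a_t are the turbulent viscosity and thermal diffusivity supplied by the k-ω SST closure.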
This study is devoted to the numerical modeling, using the OpenFOAM toolbox, of conjugate heat transfer in a closed-type power installation whose working elements are finned bimetallic tubes. Modeling the heat transfer process in bimetallic tubes requires determining the value of the contact thermal resistance at the metal/metal interface. The considered design of a bimetallic tube involves crimping copper washers onto the surface of an aluminum cylindrical tube; hence, the contact surface of the tube is not isotropic in its properties. A mathematical model of conjugate heat transfer for the air/bimetal/coolant media is proposed. The features of the organization of thermophysical processes at the metal/metal contact interface and at the metal/air and metal/coolant interfaces are shown. A qualitative comparison of the obtained results with known experimental data is carried out. Generalized temperature profiles in the longitudinal section of a fin are obtained by mathematical modeling. The given distributions of temperature and heat flux make it possible to estimate the contribution of each individual fin to the investigated process of heat removal from the air environment. The efficiency of the considered technology for manufacturing a bimetallic finned tube is shown.
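For reference, the contact thermal resistance mentioned above enters such models through the standard interface condition, which relates the heat flux across the copper/aluminum contact to the temperature jump over it:

```latex
q = \frac{T_{\mathrm{Cu}} - T_{\mathrm{Al}}}{R_c},
```

where q is the heat flux density through the contact, T_Cu and T_Al are the surface temperatures on the two sides of the interface, and R_c is the contact thermal resistance.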