The paper develops an abstract model of a distributed network containing only hosts and switches; the model makes it possible to identify problems that must be solved in such a network, taking into account, among other things, its non-functional parameters. It is assumed that hosts offer certain packages of services, and that messages (requests) between hosts are forwarded via intermediate nodes according to switching rules. A rule determines to which neighboring nodes a message received by a node is forwarded, depending on where the message came from and on the parameter vector in its header. The configuration of the nodes therefore determines the set of host-to-host paths along which packets are forwarded. The situation is modeled by a graph of physical connections whose vertices are hosts and switches and whose edges correspond to the physical links between them. It is usually assumed that hosts in such a network only receive, process, and send information to other hosts, but do not switch messages; that function is assigned to dedicated nodes, the switches. We assume, however, that with modern technologies a host can in some cases perform switching as well: such a host, like a switch, contains a system of switching rules that specify where a received message is sent if for some reason the host cannot process the request itself. A network model is therefore proposed in which the message switching function is performed not only by every switch but also by every host. The paper discusses problems associated with the non-functional parameters of a distributed network, namely host reachability/unreachability, message looping, network overload with messages, and lack of scalability. Optimization of the network parameters is discussed on the basis of the set of services provided by each host. In addition, we discuss self-tuning algorithms for a distributed network that optimize network parameters, message transmission over the configured network, and incremental (repeated partial) self-tuning algorithms applied when network parameters change, in particular the topology, without disrupting network operation.
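A minimal sketch of the forwarding behavior described above, assuming a hypothetical rule table keyed by (incoming neighbor, header parameter) and toy node names; it is illustrative only and not the paper's formal model.

# Every node, host or switch, carries a rule table mapping
# (came_from, header_param) to the set of neighbors the message goes to.
rules = {
    "h1": {("local", "svcA"): {"s1"}},
    "s1": {("h1", "svcA"): {"h2"}},
    "h2": {},                       # h2 offers svcA, so it processes the message
}

def forward(node, came_from, header_param):
    """Return the neighbors that `node` forwards the message to."""
    return rules.get(node, {}).get((came_from, header_param), set())

def route(src, header_param, handler_nodes, max_hops=16):
    """Follow the rule tables hop by hop; the hop bound guards against looping."""
    frontier = {(src, "local")}
    for _ in range(max_hops):
        nxt = set()
        for node, came_from in frontier:
            if node in handler_nodes and came_from != "local":
                return node                     # reached a host able to process the request
            for nb in forward(node, came_from, header_param):
                nxt.add((nb, node))
        if not nxt:
            return None                         # unreachable: no rule matches
        frontier = nxt
    return None                                 # hop limit hit: possible loop or overload

print(route("h1", "svcA", handler_nodes={"h2"}))   # -> "h2"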
In today's highly interconnected world, data and information are constantly transmitted over networks, and ensuring the security of confidential information and protecting computer systems from network threats has become critically important. It is therefore important to develop an effective network intrusion detection system (NIDS) based on an optimal set of features. These optimal features can be identified through computational intelligence by learning patterns and relationships among features using machine learning techniques. This paper presents a Rabbit and Tortoise optimization technique for selecting optimal features. For evaluation, the UNSW-NB15 dataset is used. The optimization achieves an accuracy of 94.12% for binary classification and 93.92% for multi-class classification, with 26 optimal features selected from the full feature set. To improve the approach, an adaptive strategy based on mutual information is used to control the number of optimal features. Together with the Rabbit and Tortoise algorithm, this strategy raises the accuracy to 94.69% for binary classification and 94.03% for multi-class classification while reducing the number of selected features to only 9. A comparative performance analysis shows that the proposed feature selection method outperforms other state-of-the-art methods, providing more accurate and reliable identification of cyber threats. In addition, the plot of model accuracy against the number of selected features shows that selecting only 9 features is sufficient to achieve high accuracy in detecting and predicting cyber-attacks.
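An illustrative sketch of a mutual-information-driven feature filter in the spirit of the adaptive strategy mentioned above; the threshold rule, the `ratio` parameter, and the synthetic data are assumptions and not the paper's exact algorithm.

# Rank features by mutual information with the class label and keep those
# above an adaptive threshold tied to the strongest feature.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_by_mutual_info(X, y, ratio=0.3):
    """Keep features whose MI score is at least `ratio` of the best score."""
    mi = mutual_info_classif(X, y, random_state=0)
    threshold = ratio * mi.max()
    keep = np.where(mi >= threshold)[0]
    return keep, mi

# Synthetic data standing in for UNSW-NB15 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 3] + X[:, 7] > 0).astype(int)   # only two features are informative
keep, mi = select_by_mutual_info(X, y)
print("selected feature indices:", keep)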
This work is devoted to the development of mixed methods for analyzing inaccuracies in compiler optimizations. Such methods are important for "Elbrus" microprocessors with a very long instruction word (VLIW) because of their static scheduling. The article analyzes existing approaches to identifying inaccuracies in optimizations and highlights their disadvantages. A method is developed for detecting inaccuracies in two optimizations that are important for VLIW: software pipelining with hardware support (overlap) and moving unlikely-executed code into a new outer loop (nesting). The method is implemented by instrumenting loops in the user program and obtaining static information about the loops from the compiler. It was evaluated on the SPEC CPU 2006 and 2017 rate suites on a computer with an Elbrus-8S processor and has proven its effectiveness: placing hints for the overlap optimization yields a speedup of 70.7% on test 523.xalancbmk, and placing hints for the nesting optimization yields 4.71% on test 520.omnetpp. The tests were run in base mode without profile information.
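A much-simplified illustration of the underlying idea, not the actual Elbrus tooling: recorded run-time trip counts are compared with the compiler's static assumption, and a hint is suggested when they diverge. The loop name, the threshold, and the static-information dictionary are assumptions for the sketch.

# Instrumentation records how many iterations each loop actually executes.
static_info = {"loop_42": {"assumed_trip_count": 4, "pipelined": False}}
runtime_counts = {"loop_42": []}

def run_loop(loop_id, n):
    runtime_counts[loop_id].append(n)     # instrumentation: record iterations
    for _ in range(n):
        pass                              # loop body stands in for real work

for n in (120, 130, 125):
    run_loop("loop_42", n)

for loop_id, counts in runtime_counts.items():
    avg = sum(counts) / len(counts)
    info = static_info[loop_id]
    if not info["pipelined"] and avg > 8 * info["assumed_trip_count"]:
        print(f"{loop_id}: avg {avg:.0f} iterations vs assumed "
              f"{info['assumed_trip_count']}; consider an overlap (pipelining) hint")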
Detection of dead code (i.e., code that is executed but does not affect observable program behavior) is commonly used by compilers as part of optimization techniques for redundant code elimination. At the same time, dead function calls can be seen as a kind of source code defect that may point to serious faults in program logic. We describe a new detector for such issues developed as part of the Svace static defect detection tool, as well as the specific cases that must be filtered out to make detection of dead function calls practical as a reporter of program errors, in contrast to their formal definition.
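An illustration of the defect class itself, independent of the analyzed language and of Svace's internal representation; the functions and names are made up for the example.

# A call whose result is discarded and that has no side effects contributes
# nothing to observable behavior, yet usually signals a logic fault: the
# author almost certainly meant to use the result.
def normalize(name: str) -> str:
    return name.strip().lower()

def register(name: str, users: list) -> None:
    normalize(name)          # dead call: the normalized value is thrown away
    users.append(name)       # the raw, un-normalized name is stored instead

users = []
register("  Alice ", users)
print(users)                 # ['  Alice '] -- the bug a dead-call detector points at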
The development of programmable and resource-efficient hardware accelerators for regular expression matching is an important research direction in network security, where high throughput for streaming data processing and resilience against ReDoS (Regular Expression Denial of Service) attacks are critical. This paper presents the HOREC compiler, which is used in the high-level design loop of a programmable hardware accelerator. HOREC relies on a novel extension of the deterministic finite automaton that enables compact representation of interval quantifiers with large repetition counts, typical of rules in intrusion detection systems. Algorithms are described that reduce the number of transitions per instruction and decrease the total number of instructions. The paper presents a software model of the accelerator based on an interpreter for matching compiled patterns and defines a set of parameters for architectural design space exploration of the hardware accelerator, including available memory size, instruction format, and symbol matching modes. Experimental evaluation was performed on 7234 regular expressions extracted from the ET OPEN rules. The results demonstrate the high resource efficiency of the proposed solution: up to 7000 expressions were accommodated in a 64K instruction memory. Furthermore, in 60% of cases the number of transitions per instruction does not exceed 4, and the use of a global symbol set table of 256 frequently used elements allows each program's local table to be limited to just 10 symbol sets. These results confirm HOREC's applicability for hardware implementation of regular expression matching in network security tasks and show the potential of the proposed approach for high-level design of low-hardware-cost accelerators.
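A sketch of the counting idea only, not HOREC's instruction format: an interval quantifier such as b{100,200} can be handled by one state plus a counter instead of up to 200 expanded DFA states. The toy pattern and bounds are assumptions for illustration.

def match_counted(text, lo=100, hi=200):
    """Match the toy pattern a b{lo,hi} c with a single counter."""
    i, n = 0, len(text)
    if i >= n or text[i] != "a":
        return False
    i += 1
    count = 0
    while i < n and text[i] == "b" and count < hi:
        i += 1
        count += 1
    if count < lo:
        return False                       # too few repetitions
    return i < n and text[i] == "c"

print(match_counted("a" + "b" * 150 + "c"))   # True
print(match_counted("a" + "b" * 50 + "c"))    # False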
Semantic code analysis is an important but time-consuming process used in many areas of programming. The purpose of this work is to study a method for automating the semantic analysis of binary code, based on dividing software into semantic kernels, using partial execution traces or subgraph extraction from the call graph, and identifying the functionality of these kernels.
Application migration is the process of moving software from one platform or API version to another. With rapid technological development and constant changes in user preferences, effective interface migration has become a necessity for keeping applications competitive. This article provides an overview of modern methods of application programming interface migration, focusing on the importance of adapting software to changing conditions and user requirements. The article also classifies existing approaches to migration, including the use of automated tools and of adaptation and refactoring methods for code in object-oriented languages. The advantages and disadvantages of various methods, such as adapting user interfaces to new platforms, template-based migration, and using adapters to ensure compatibility between legacy and new interfaces, are considered. The challenges faced by developers during migration are discussed, including semantic transformation issues and the need to take into account the specifics of target platforms. This review will be useful both for researchers and for practitioners working in software development, providing knowledge about methods and approaches for successful migration of application programming interfaces.
The article considers the development of a flexible role system configuration mechanism for applications that require dynamic differentiation of user rights depending on the business context or other conditions. Modern applications are becoming increasingly complex, which creates the need for effective access control mechanisms. The described approach relies on a custom role model that makes it possible to fine-tune user rights for various entities in the database. The focus is on the flexibility and scalability of the model, which allows access to be configured depending on object statuses, user roles, and business needs. As an example, the configuration of an online store is considered, with differentiation by user roles and various product statuses. The access configuration process is described in detail, and example code is provided for checking permissions and dynamically displaying fields. In conclusion, the advantages and disadvantages of this mechanism are discussed.
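A hypothetical configuration in the spirit described above, not the article's actual code: permissions are resolved from (role, entity, object status), so access can change as the object moves through its lifecycle, and field visibility is derived from the role. All names, statuses, and fields are assumptions.

PERMISSIONS = {
    ("manager", "product", "draft"):      {"view", "edit", "publish"},
    ("manager", "product", "published"):  {"view", "edit"},
    ("customer", "product", "published"): {"view"},
}

VISIBLE_FIELDS = {
    "manager":  ["name", "price", "cost_price", "stock"],
    "customer": ["name", "price"],
}

def is_allowed(role, entity, status, action):
    """Check whether the role may perform the action on an entity in this status."""
    return action in PERMISSIONS.get((role, entity, status), set())

def visible_fields(role, obj):
    """Return only the fields this role is allowed to see."""
    return {f: obj[f] for f in VISIBLE_FIELDS.get(role, []) if f in obj}

product = {"name": "Lamp", "price": 30, "cost_price": 18, "stock": 12, "status": "published"}
print(is_allowed("customer", "product", product["status"], "edit"))   # False
print(visible_fields("customer", product))                            # {'name': 'Lamp', 'price': 30}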
Every business organization has a subset of data that must be highly consistent: legal information, supplier and contract data, the customer base, etc. Customers and employees expect to receive the same information about the same data object from different organizational sources, which are usually other information systems. The process of consolidating and centrally controlling such data throughout the organization is called Master Data Management (MDM). The iterative deployment strategy is a popular way to introduce MDM to an organization: it assumes a step-by-step implementation of MDM components driven by the real needs of the organization. In this paper, we present a functional MDM model for the early stages of MDM implementation within the iterative deployment strategy. The purpose of this model is to represent the real business needs of an organization in MDM terms, making clear which MDM components should be implemented and which should not. A detailed description of the model's components is provided. In addition, a case study is presented: a portfolio of six real MDM projects analyzed from the viewpoint of the proposed model.
The MaxSMT problem is, given a first-order theory formula whose constraints are divided into hard and soft ones, to find an assignment that satisfies all hard constraints and maximizes the number (or total weight) of satisfied soft constraints. The paper presents an approach to solving this problem for quantifier-free theories, which are usually decidable and widely applicable in practice for software test generation. The approach includes a modified version of the MaxSAT algorithm PrimalDualMaxRes in which the SAT solver is replaced by an SMT solver, as well as a portfolio mode: the final MaxSMT algorithm runs several MaxSMT solvers in parallel, and the best result is taken. Z3, Yices, and Bitwuzla were chosen as the SMT solvers on which the portfolio mode is built. The approach is implemented using the open-source Kotlin library KSMT, which provides the infrastructure for working with a set of MaxSMT solvers. The paper also presents the first benchmark for the MaxSMT problem. The evaluation demonstrates the competitiveness of the proposed solution: the developed portfolio of MaxSMT solvers outperforms the existing MaxSMT solver νZ (part of the Z3 project), solving instances from the benchmark more than four times faster.
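The portfolio idea in miniature; the real implementation dispatches to Z3-, Yices-, and Bitwuzla-based MaxSMT solvers through KSMT, whereas `run_backend` below is a stand-in stub and the budget and return format are assumptions.

# Several backends attack the same instance in parallel; the best answer
# found within the budget wins, and a proved optimum ends the race early.
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_backend(name, instance, budget_s):
    """Stub for a real MaxSMT backend. Returns (name, satisfied soft weight, is_optimal)."""
    import random, time
    time.sleep(random.uniform(0.0, 0.1))
    satisfied = random.randint(0, instance["soft_count"])
    return name, satisfied, satisfied == instance["soft_count"]

def portfolio_solve(instance, backends=("z3", "yices", "bitwuzla"), budget_s=1.0):
    best = ("none", -1, False)
    with ThreadPoolExecutor(max_workers=len(backends)) as pool:
        futures = [pool.submit(run_backend, b, instance, budget_s) for b in backends]
        for fut in as_completed(futures, timeout=budget_s + 1):
            result = fut.result()
            if result[1] > best[1]:
                best = result
            if result[2]:
                break          # an optimum was found, no need to wait for the rest
    return best

print(portfolio_solve({"soft_count": 10}))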
This paper presents a novel UAV-based system for real-time 3D object localization that integrates a monocular camera, a gimbaled laser rangefinder, and onboard computer vision. Unlike prior methods that rely on assumptions such as known object size, flat terrain, or simulation-only validation, our approach enables accurate localization of targets without prior knowledge of the environment. The system performs real-time tracking and localization entirely onboard the drone through active sensor fusion and gimbal control. We implemented the method in a universal software framework and validated it in field experiments, demonstrating its accuracy and robustness in real-world conditions.
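The simplified geometry behind such a system, ignoring the drone's own attitude and the camera/rangefinder lever arm (both of which a real system must account for): the target position is the drone position plus the measured range along the gimbal's pointing direction. Coordinates, angles, and the example values are assumptions.

import math

def localize(drone_xyz, gimbal_yaw_deg, gimbal_pitch_deg, range_m):
    """Return the target position in the same local frame as drone_xyz."""
    yaw = math.radians(gimbal_yaw_deg)
    pitch = math.radians(gimbal_pitch_deg)     # negative pitch looks down
    direction = (
        math.cos(pitch) * math.cos(yaw),
        math.cos(pitch) * math.sin(yaw),
        math.sin(pitch),
    )
    return tuple(p + range_m * d for p, d in zip(drone_xyz, direction))

# Drone hovering 50 m up, gimbal pointing 30 degrees below the horizon, range 100 m.
print(localize((0.0, 0.0, 50.0), gimbal_yaw_deg=0.0, gimbal_pitch_deg=-30.0, range_m=100.0))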
The article describes the nominal inflexion system of the language used in the 1804 pamphlet "Translation of some prayers and the shorter catechism in the Korelian language", the first printed piece of the Tver Karelian written heritage. This study was necessitated by the lack of a previous systematic linguistic analysis of this source, which contains a unique corpus of data on Karelian historical grammar and dialectology. The Karelian text comprises around 2,500 lexemes, more than half of which are nominal parts of speech. The analysis covers all grammatical categories of nominals: number, case, and possessiveness. The study was conducted using the tools of the LingvoDoc linguistic platform. The material found in this source was compared with the new-script variant of the Tver Karelian language and with data on mid-20th-century Tver Karelian dialectal variants from the Murreh dialect database.
The nominal inflexion system in the translated text shows no substantial differences from the Tolmachyovo sub-dialect group of the southern dialect of Karelian Proper, supporting the dialectal attribution of the source made previously. One of the findings of this study is that the language of the source preserves some archaic traits that have since been lost in modern Tver Karelian sub-dialects. Such archaisms include the use of the plurality indicator -й- with polysyllabic single-stem nominals ending in a diphthong; the endings -енъ in the genitive plural, -ѧ in the partitive plural, -же in the illative, -ла / -лѧ in the adessive-allative, -са / -сѧ in the inessive, and -тѧ in the abessive; possessive suffixes; and the use of postpositional constructions to render the comitative and approximative meanings. A novel feature is the expansion of the plurality formant -лой- / -лїой- to single-stem nominals with an -e vowel stem and a trend away from possessive suffixes towards a wider use of genitive constructions.
This work implements a field-programmable gate array (FPGA) architecture for an intelligent battery management unit (i-BMU) designed for an electric vehicle (EV) and investigates how it can increase the vehicle's mileage. The distinguishing feature of the proposed i-BMU is an FPGA architecture that allocates power to the various electrical and electronic components of an EV by considering the run-time driving pattern and the state of charge (SoC) of the Li-ion battery. The methodology involves dynamically estimating the SoC of the Li-ion battery with a Long Short-Term Memory neural network (LSTM-NN) model while the vehicle is in motion, predicting driving cycles such as urban, highway, and downhill in real time with a regression tree algorithm, and intelligently allocating electric power to the EV components based on the predicted driving cycle and the estimated SoC using the proposed power distribution algorithm. The system is designed on the Zynq UltraScale+ MPSoC development board, and the verification data are obtained by simulating sensor values for an electric bike, such as speed, throttle position, battery voltage, battery current, and GPS coordinates, generated randomly within typical operational ranges. The proposed system is compared with an existing system in terms of chip power consumption (W), chip area (mm²), computation time (μs), and throughput. In addition, the proposed method evaluates the mileage of the EV and extends its range by 17 km to 36 km depending on the driving pattern.
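Illustrative decision logic only; the thresholds, subsystems, and budget scale are assumptions and not the paper's power distribution algorithm. It shows the shape of a rule that maps the estimated SoC and the predicted driving cycle to per-subsystem power budgets.

def allocate_power(soc_percent, driving_cycle):
    """Return per-subsystem power budgets as fractions of the nominal allocation."""
    budget = {"drivetrain": 1.0, "climate": 1.0, "infotainment": 1.0, "lights": 1.0}
    if driving_cycle == "highway":
        budget["climate"] = 0.8           # favour range at sustained speed
    elif driving_cycle == "downhill":
        budget["drivetrain"] = 0.6        # regeneration covers part of the demand
    if soc_percent < 30:
        budget["infotainment"] = 0.5
        budget["climate"] = min(budget["climate"], 0.6)
    if soc_percent < 15:
        budget["climate"] = 0.3
        budget["infotainment"] = 0.2
    return budget

print(allocate_power(soc_percent=22, driving_cycle="urban"))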
Automation of routine operations in medical image analysis is an important task, as it reduces the workload of radiologists. Selecting the computed tomography images corresponding to the levels of specific vertebrae for assessing a patient's body composition is usually done manually, which takes additional time. The purpose of this work is to develop an approach to vertebrae localization on midsagittal computed tomography slices for automatic selection of the axial slices used to assess body composition. We developed an approach based on a multiclass segmentation model with a U-Net-family architecture combined with computer vision methods for image preprocessing and segmentation mask postprocessing. To assess the impact of input data types and model architectures on segmentation accuracy, we considered 20 configurations of the approach. We found that the proposed input preprocessing method, based on the formation of three-channel images, increases the accuracy of multiclass segmentation for four of the five architectures considered (Dense U-Net demonstrates the maximum Dice similarity coefficient of 0.8858). We also found that the proposed training set augmentation method, based on skipping axial slices when forming sagittal slices, improves multiclass segmentation accuracy for the ResU-Net and Dense U-Net architectures. Based on the proposed approach, we implemented a software module that automatically determines the positions of the cervical, thoracic, and lumbar vertebrae on the midsagittal computed tomography slice, visualizes them, and determines the indices of the axial slices corresponding to the vertebral body centers. We integrated the developed module with a program for visualization and analysis of DICOM medical files. The module can be used as an auxiliary tool in diagnostic work.
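A hedged sketch of one plausible way to form a three-channel input from a CT volume; the paper's exact recipe may differ. Here the midsagittal slice is stacked with its two neighbours so the network sees some out-of-plane context, after clipping and normalising Hounsfield values; the HU window and array shapes are assumptions.

import numpy as np

def make_three_channel(volume_hu, x_index, hu_min=-200.0, hu_max=1000.0):
    """volume_hu has shape (z, y, x); a sagittal slice is volume_hu[:, :, x]."""
    x0 = int(np.clip(x_index, 1, volume_hu.shape[2] - 2))
    channels = [volume_hu[:, :, x0 - 1], volume_hu[:, :, x0], volume_hu[:, :, x0 + 1]]
    img = np.stack(channels, axis=-1).astype(np.float32)
    img = np.clip(img, hu_min, hu_max)
    return (img - hu_min) / (hu_max - hu_min)   # normalise to [0, 1]

# Synthetic volume standing in for a CT study.
volume = np.random.randint(-1000, 1500, size=(120, 256, 256)).astype(np.float32)
print(make_three_channel(volume, x_index=128).shape)   # (120, 256, 3)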





