This special issue contains selected papers that were submitted to Proceedings of the Institute for System Programming of the Russian Academy of Sciences. Thirteen submissions from nine countries (England, Mexico, China, Uruguay, Spain, Pakistan, Cuba, the Dominican Republic, and Russia) cover several important topics in the rapidly expanding area of research and development related to Advanced Computing. The authors present a spectrum of approaches to solving complex problems, including: data-oriented scheduling, scientific workflows, cloud computing, evolutionary algorithms, content distribution networks, soft computing, parallel programming models for multicore machines, high-performance computing, data mining, software birthmarking, anomaly detection, swarm robotics, neural networks, machine learning, security, secret-sharing schemes, heterogeneous distributed computing, and the Internet of Things.
This article presents the application of soft computing methods to the problem of designing and optimizing cloud-based Content Distribution Networks (CDN). A multi-objective approach is applied to solve the resource provisioning problem for building the network infrastructure, considering the objectives of minimizing the cost of virtual machines, network, and storage, and maximizing the quality of service provided to end users. A specific brokering model is proposed to allow a single cloud-based CDN to host multiple content providers by applying a resource-sharing strategy. Following the proposed brokering model, three multi-objective evolutionary approaches are studied for the offline optimization of resource provisioning, and a greedy heuristic method is proposed for the online routing of contents. The experimental evaluation of the proposed approach is performed over a set of realistic problem instances. The obtained experimental results indicate that the proposed approach is effective for designing and optimizing cloud-based Content Distribution Networks: total costs are reduced by up to 10.34% while maintaining high quality-of-service values.
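To illustrate the online part of such an approach, the following minimal sketch shows a greedy routing rule of the kind described above: each request is served from the cheapest provisioned datacenter that still meets a latency bound. The datacenter model, names, and parameters are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; not the authors' heuristic or data model.
from dataclasses import dataclass

@dataclass
class Datacenter:            # hypothetical simplified model
    name: str
    unit_cost: float         # cost per GB transferred
    latency_ms: float        # latency to the requesting region
    capacity_gb: float       # remaining outbound capacity

def route_request(datacenters, size_gb, max_latency_ms=100.0):
    """Pick a datacenter for one content request (greedy, cost-first)."""
    feasible = [dc for dc in datacenters
                if dc.capacity_gb >= size_gb and dc.latency_ms <= max_latency_ms]
    if feasible:                                   # cheapest among QoS-compliant
        chosen = min(feasible, key=lambda dc: dc.unit_cost)
    else:                                          # degrade gracefully on QoS
        chosen = min((dc for dc in datacenters if dc.capacity_gb >= size_gb),
                     key=lambda dc: dc.latency_ms)
    chosen.capacity_gb -= size_gb
    return chosen.name

dcs = [Datacenter("eu-1", 0.08, 40, 500), Datacenter("us-1", 0.05, 120, 500)]
print(route_request(dcs, size_gb=2.0))   # -> "eu-1" (meets the 100 ms bound)
```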
This article presents the application of Virtual Savant to resource allocation problems, a widely studied area with several real-world applications. Virtual Savant is a novel soft computing method that uses machine learning techniques to compute solutions to a given optimization problem. Virtual Savant aims at learning how to solve a given problem from the solutions computed by a reference algorithm, and its design allows taking advantage of modern parallel computing infrastructures. The proposed approach is evaluated on the Knapsack Problem, which models different variants of resource allocation problems, considering a set of instances of varying size and difficulty. The experimental analysis is performed on an Intel Xeon Phi many-core server. Results indicate that Virtual Savant is able to compute accurate solutions while showing good scalability properties as the number of computing resources used increases.
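The following illustrative sketch (not the authors' code) conveys the Virtual Savant idea on the Knapsack Problem: a reference algorithm (here, a simple greedy heuristic) labels the items of training instances, a trivial rule (a value/weight-ratio threshold) is learned from those labels, and new instances are then solved by independent per-item predictions followed by a repair pass. All parameters are illustrative.

```python
# Illustrative sketch of the learn-from-a-reference-algorithm idea only.
import random

def greedy_knapsack(values, weights, capacity):
    """Reference algorithm: pack items by decreasing value/weight ratio."""
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i], reverse=True)
    packed, load = set(), 0
    for i in order:
        if load + weights[i] <= capacity:
            packed.add(i)
            load += weights[i]
    return packed

def learn_ratio_threshold(instances):
    """'Training': the smallest ratio the reference algorithm ever packed."""
    packed_ratios = []
    for values, weights, capacity in instances:
        sol = greedy_knapsack(values, weights, capacity)
        packed_ratios += [values[i] / weights[i] for i in sol]
    return min(packed_ratios)

def predict(values, weights, capacity, threshold):
    """'Savant' phase: independent per-item predictions, then a repair pass."""
    guess = [i for i in range(len(values)) if values[i] / weights[i] >= threshold]
    load, repaired = 0, []
    for i in sorted(guess, key=lambda i: values[i] / weights[i], reverse=True):
        if load + weights[i] <= capacity:          # drop items that overflow
            repaired.append(i)
            load += weights[i]
    return repaired

random.seed(0)
train = [([random.randint(1, 50) for _ in range(30)],
          [random.randint(1, 20) for _ in range(30)], 100) for _ in range(5)]
thr = learn_ratio_threshold(train)
print(predict(*train[0], thr))
```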
The article deals with the search for the global extremum in the training of artificial neural networks using the correlation index. The proposed method is based on a mathematical model of an artificial neural network represented as an information transmission system. The efficiency of the proposed model is confirmed by its broad application in information transmission systems for analyzing and recovering a useful signal against the background of various types of interference: Gaussian, concentrated, pulsed, etc. The convergence of the training and the experimentally obtained sequences is analyzed on the basis of the correlation index. The possibility of estimating the convergence of the training and experimentally obtained sequences by the cross-correlation function, as a measure of their energy similarity (difference), is confirmed. To evaluate the proposed method, a comparative analysis is carried out against the target indicators currently in use. Possible sources of errors of the least-squares method, and the ability of the proposed index to overcome them, are investigated.
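Assuming the correlation index amounts to a normalized cross-correlation between the target (training) sequence and the sequence produced by the network, the following sketch computes such an index alongside the usual least-squares (MSE) indicator for comparison; it is an illustration, not the paper's code.

```python
# Illustration under an assumed definition of the correlation index.
import numpy as np

def correlation_index(target, output):
    """Normalized cross-correlation at zero lag (energy similarity measure)."""
    t = target - target.mean()
    o = output - output.mean()
    return float(np.dot(t, o) / (np.linalg.norm(t) * np.linalg.norm(o)))

def mse(target, output):
    """The conventional least-squares target indicator, for comparison."""
    return float(np.mean((target - output) ** 2))

x = np.linspace(0, 2 * np.pi, 200)
target = np.sin(x)
output = np.sin(x) + np.random.normal(0, 0.1, x.size)   # noisy "network output"

print(f"correlation index: {correlation_index(target, output):.3f}")
print(f"MSE              : {mse(target, output):.4f}")
```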
802.11 wireless local area networks (WLANs) can support multiple data rates at the physical layer by using an adaptive modulation and coding (AMC) scheme. However, this differential data-rate capability introduces a serious performance anomaly in WLANs. In a network comprising several nodes with varying transmission rates, nodes with lower data rates (slow nodes) degrade the throughput of nodes with higher transmission rates (fast nodes). The primary source of this anomaly is the channel access mechanism of WLANs, which ensures long-term equal channel access probability to all nodes irrespective of their transmission rates. In this work, we investigate the use of adaptable-width channelization to minimize the effect of this performance anomaly. It has been observed that surplus channel width, left unused due to the lower transmission rate of slow nodes, can be assigned to fast nodes connected to other access points (APs), which can substantially increase the overall throughput of the whole network. We propose a medium access control (MAC) layer independent anomaly prevention (MIAP) algorithm that assigns channel width to nodes connected to different APs based on their transmission rates. We model the effect of adaptable channelization and provide lower and upper bounds on throughput in various network scenarios. Our empirical results indicate a possible increase in network throughput of more than 20% when employing the proposed MIAP algorithm.
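The MIAP algorithm itself is not reproduced here; the following simplified sketch only illustrates the underlying idea of reassigning surplus channel width: the total spectrum is split among APs in proportion to the aggregate PHY rate of their associated nodes and snapped to standard 802.11 channel widths. All names and parameters are illustrative assumptions.

```python
# Simplified proportional-width allocation; not the MIAP algorithm itself.
ALLOWED_WIDTHS_MHZ = [5, 10, 20, 40, 80]

def allocate_widths(ap_node_rates, total_width_mhz=80):
    """ap_node_rates: {ap_name: [node PHY rates in Mbit/s]} -> {ap_name: width}."""
    totals = {ap: sum(rates) for ap, rates in ap_node_rates.items()}
    grand_total = sum(totals.values())
    widths = {}
    for ap, rate_sum in totals.items():
        share = total_width_mhz * rate_sum / grand_total       # proportional share
        widths[ap] = max((w for w in ALLOWED_WIDTHS_MHZ if w <= share),
                         default=min(ALLOWED_WIDTHS_MHZ))      # snap down
    return widths

aps = {"AP1": [54, 54, 48],        # mostly fast nodes
       "AP2": [6, 11, 6]}          # slow nodes leave surplus width to AP1
print(allocate_widths(aps))        # e.g. {'AP1': 40, 'AP2': 10}
```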
Nowadays, artificial intelligence and swarm robotics are becoming widespread and are increasingly applied to civilian tasks. The main purpose of the article is to show how sharing common knowledge about the surroundings, implemented via data transfer within the group, influences the robotic group navigation problem. The methodology provided in the article reviews a set of tasks whose implementation improves the results of robotic group navigation. The main research questions are the problems of robot vision, path planning, data storage, and data exchange. The article describes the structure of a real-time laser technical vision system as the main environment-sensing tool for the robots. The vision system uses the dynamic triangulation principle. The article provides examples of the obtained data and distance-based methods for resolution and speed control. Based on the data obtained by the vision system, a matrix-based approach was chosen for robot path planning; it involves the tasks of surroundings discretization and trajectory approximation. Two network structure types for data transfer are compared. The authors propose a methodology for dynamic network formation based on a leader-changing system. To confirm the theory, a robotic group modeling application was developed. The obtained results show that sharing common knowledge between robots within a group can significantly decrease the length of individual trajectories.
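As a minimal illustration of matrix-based path planning on a discretized occupancy grid, the following sketch runs a breadth-first search over a small grid; the paper's own matrix method and trajectory approximation are more elaborate, so this should be read only as a sketch of the surroundings-discretization idea.

```python
# Minimal grid path-planning illustration; not the paper's matrix method.
from collections import deque

def grid_path(grid, start, goal):
    """grid: 0 = free cell, 1 = obstacle; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    queue, parent = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:                     # reconstruct the path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(grid_path(grid, (0, 0), (2, 0)))
```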
We propose a new approach to solving important practical problems of complex debugging, joint testing, and analysis of the execution time of software module versions in a heterogeneous distributed computing environment that integrates Grid and cloud computing. These problems arise in the process of supporting the continuous integration of modules of distributed applied software packages. The study focuses on packages that are used to conduct large-scale computational experiments. The scientific novelty of the proposed approach lies in combining the methodology for creating such packages with modern software development practices based on continuous integration, using knowledge about the specifics of the problems being solved. Our contribution is manifold. We expanded the capabilities of continuous integration tools by developing new additional tools for the markup and transformation of data from poorly structured sources and for predicting module execution times. In addition, we developed a technological scheme for the joint application of our tools and external continuous integration systems. Thus, we provide a larger range of continuous integration capabilities for creating and using the packages in comparison with well-known tools. The fundamental basis of their functioning is a new conceptual model of the packages. This model supports the specification, planning, and execution of software continuous integration processes, taking into account the specific subject data and problems being solved. Applying the developed tools in practice leads to a decrease in the number of errors and failures of applied software in the development and use of the packages. In turn, such a decrease significantly reduces the time for large-scale computational experiments and increases the efficiency of using the resources of the environment. The results of practical experiments on the use of the system prototype for continuous integration of applied software show its high efficiency.
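As a hedged illustration of one element of this toolchain, predicting a module's execution time from historical runs, the following sketch fits a simple least-squares line of runtime versus input size; the paper does not specify its prediction model, and all names and data here are hypothetical.

```python
# Hypothetical runtime predictor; the paper's actual model is not specified here.
import numpy as np

history_sizes = np.array([100, 200, 400, 800, 1600])       # past input sizes
history_times = np.array([2.1, 4.0, 8.3, 16.5, 33.2])      # measured runtimes, s

slope, intercept = np.polyfit(history_sizes, history_times, 1)

def predict_runtime(size):
    """Estimate module runtime (s) for a planned input size."""
    return slope * size + intercept

print(f"predicted runtime for size 1000: {predict_runtime(1000):.1f} s")
```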
Cloud computing is one of the most prominent parallel and distributed computing paradigms. It is used to provide solutions to a huge number of scientific and business applications. Large-scale scientific applications structured as scientific workflows are executed in cloud computing environments. Scientific workflows are data-intensive applications, as a single scientific workflow may consist of hundreds of thousands of tasks. Task failures, deadline constraints, budget constraints, and improper management of tasks can also cause difficulties. Therefore, combining fault-tolerant techniques with data-oriented scheduling is an important approach for the execution of scientific workflows in cloud computing. Accordingly, we present enhanced data-oriented scheduling with a dynamic-clustering fault-tolerant technique (EDS-DC) for the execution of scientific workflows in cloud computing: data-oriented scheduling is the proposed scheduling technique, and EDS-DC is additionally equipped with a dynamic-clustering fault-tolerant mechanism. To assess the effectiveness of EDS-DC, we compared its results with three well-known enhanced heuristic scheduling policies: (a) MCT-DC, (b) Max-min-DC, and (c) Min-min-DC. We considered the CyberShake scientific workflow as a case study, because it contains most of the characteristics of scientific workflows, such as integration, disintegration, parallelism, and pipelining. The results show that EDS-DC reduced makespan by 10.9% compared to MCT-DC, by 13.7% compared to Max-min-DC, and by 6.4% compared to Min-min-DC. Similarly, EDS-DC reduced cost by 4% compared to MCT-DC, by 5.6% compared to Max-min-DC, and by 1.5% compared to Min-min-DC. These makespan and cost results are highly significant for EDS-DC compared with the three scheduling policies referred to above. The SLA is not violated for EDS-DC with respect to time and cost constraints, while it is violated a number of times for the MCT-DC, Max-min-DC, and Min-min-DC scheduling techniques.
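As a point of reference for the comparison above, the following sketch implements the classic Min-min policy (one of the baselines named in the abstract): at each step, the task with the smallest minimum completion time over all VMs is scheduled on the VM that achieves it. EDS-DC itself is not reproduced here; the task and VM data are illustrative.

```python
# Classic Min-min scheduling baseline; EDS-DC is not reproduced here.
def min_min_schedule(exec_time, num_vms):
    """exec_time[t][v] = runtime of task t on VM v; returns (assignment, makespan)."""
    ready = [0.0] * num_vms                     # time at which each VM becomes free
    unscheduled = set(range(len(exec_time)))
    assignment = {}
    while unscheduled:
        best = None                             # (completion_time, task, vm)
        for t in unscheduled:
            for v in range(num_vms):
                finish = ready[v] + exec_time[t][v]
                if best is None or finish < best[0]:
                    best = (finish, t, v)
        finish, t, v = best
        assignment[t] = v
        ready[v] = finish
        unscheduled.remove(t)
    return assignment, max(ready)

times = [[4, 6], [3, 5], [8, 2], [5, 4]]        # 4 tasks on 2 VMs (illustrative)
print(min_min_schedule(times, num_vms=2))       # -> ({2: 1, 1: 0, 3: 1, 0: 0}, 7)
```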
In this paper, we give an overview of movement, foraging, and feeding ecology, as well as sensor technologies that could be embedded into an IoT-based platform for Precision Livestock Farming (PLF). A total of 43 peer-reviewed journal papers indexed by Web of Science were surveyed. First, the sensor technologies (e.g., RFID, GPS, or accelerometers) used by the authors of each paper were identified. Then, the papers were classified according to their applicability to ecological studies in the fields of foraging and feeding behavior.
Mobile Ad-Hoc Networks (MANET) require special approaches to the design and selection of data transmission and security algorithms. Node mobility and dynamic topology give rise to two key problems of MANET: the difficulty of ensuring confidentiality when transmitting data through the network and the complexity of organizing reliable data transfer. This paper proposes a new approach to organizing data transfer through a MANET, based on node-disjoint multipath routing and modular coding of data. Distributed modular coding allows the use of secret-sharing schemes to ensure confidentiality on the one hand and reliable coding on the other. In this paper, a computationally secure secret-sharing scheme based on the Residue Number System is used, which ensures the confidentiality of data and the reliability of their transmission. Such an approach also allows for balancing the network load.
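The following simplified sketch illustrates secret sharing in a Residue Number System using a Mignotte-style (k, n) threshold scheme, not the computationally secure scheme used in the paper: each share is the secret's residue modulo one of n pairwise coprime moduli (one share per disjoint path), and any k shares whose moduli product exceeds the secret recover it via the Chinese Remainder Theorem. The moduli and secret are illustrative.

```python
# Mignotte-style RNS sharing illustration; not the paper's exact scheme.
from math import prod

def split(secret, moduli):
    """Each node/path gets one share: (modulus, secret mod modulus)."""
    return [(m, secret % m) for m in moduli]

def reconstruct(shares):
    """CRT over any subset of shares whose moduli product exceeds the secret."""
    M = prod(m for m, _ in shares)
    x = 0
    for m, r in shares:
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # modular inverse of Mi modulo m
    return x % M

moduli = [97, 101, 103, 107, 109]        # pairwise coprime (here, primes)
secret = 123456
shares = split(secret, moduli)
print(reconstruct(shares[:3]))           # any 3 shares suffice: 97*101*103 > secret
```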