The data transfer speed was measured for both cases; for the direct transfer between FPGAs it reached about 99% of the maximum speed of a PCIe x4 Gen 2.0 link (1603 MB/s for a 128-byte payload and 1740 MB/s for a 256-byte payload). The direct data transfer latency was also measured: 0.7 µs with one intermediate PCIe switch and 1 µs with three intermediate switches.
The effect of simultaneous transfers on data transfer speed was studied as well. As long as the aggregate transfer speed does not exceed the bandwidth of the shared link, each transfer runs at its maximum speed; beyond that point the utilization of the shared link reaches 100% and its bandwidth is distributed equally among the individual transfers.
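For context on how close such figures are to the theoretical limit, the sketch below estimates the effective throughput of a PCIe link from the payload size and a fixed per-packet overhead. The overhead value and the resulting numbers are illustrative assumptions, not figures from the measurements; real efficiency also depends on flow control and data link layer traffic.

    #include <cstdio>

    // Rough estimate of effective PCIe throughput for a given TLP payload size.
    // Assumptions (illustrative only): PCIe Gen 2.0, 5 GT/s per lane with 8b/10b
    // encoding, i.e. about 500 MB/s of raw bandwidth per lane, and a fixed
    // per-packet overhead (header + framing) of about 24 bytes.
    double effective_mbps(int lanes, int payload_bytes, int overhead_bytes = 24) {
        const double raw_per_lane = 500.0;          // MB/s after 8b/10b encoding
        double raw = raw_per_lane * lanes;          // raw link bandwidth
        double efficiency = static_cast<double>(payload_bytes)
                            / (payload_bytes + overhead_bytes);
        return raw * efficiency;                    // usable payload bandwidth
    }

    int main() {
        // A PCIe x4 Gen 2.0 link, as in the measurements above.
        std::printf("128-byte payload: ~%.0f MB/s\n", effective_mbps(4, 128));
        std::printf("256-byte payload: ~%.0f MB/s\n", effective_mbps(4, 256));
        return 0;
    }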
Also, a technique is presented for converting a dynamically collected profile into the LLVM profile format for statically compiled applications. The paper evaluates both existing optimizations modified to use the collected profile and newly developed ones. Profile-based inlining, outlining, and speculative devirtualization were implemented, as well as annotation of the control flow graph with profile weights, which makes profile-based optimizations easier to debug. A technique is proposed that uses prefetch instructions to improve the effectiveness of array-processing code. The paper also describes an approach to building a dedicated cloud application storage that helps solve program optimization and protection problems. Finally, we present an approach to reducing compilation and optimization overhead in the cloud infrastructure.
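As an illustration of the prefetch-based technique mentioned above, the sketch below inserts software prefetches into an array-processing loop with an indirect access pattern. The loop, the prefetch distance, and the locality hint are our own illustrative choices, not the exact transformation developed in the paper; __builtin_prefetch is the GCC/Clang builtin.

    #include <cstddef>

    // Sum array elements gathered through an index table. The indirect access
    // pattern defeats the hardware prefetcher, so software prefetches are
    // issued a fixed distance ahead. The distance of 16 elements is an
    // illustrative tuning parameter, not a value from the paper.
    double indexed_sum(const double* data, const int* idx, std::size_t n) {
        const std::size_t distance = 16;
        double sum = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            if (i + distance < n) {
                // arguments: address, 0 = read access, 1 = low temporal locality
                __builtin_prefetch(&data[idx[i + distance]], 0, 1);
            }
            sum += data[idx[i]];
        }
        return sum;
    }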
As deobfuscation methods require analysis and transformation algorithms similar to those of an optimizing compiler, we have evaluated the LLVM compiler infrastructure as a basis for deobfuscation software. The difference from a compiler is that the deobfuscation algorithms do not have full information about the program being analyzed, but only a small part of it. The evaluation results show that using LLVM directly does not remove all the artifacts from the obfuscated code, so to produce cleaner output it is desirable to develop a dedicated tool. Nevertheless, using LLVM or a similar compiler infrastructure is a feasible approach to developing deobfuscation software.
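As a minimal example of the kind of artifact in question, the snippet below shows an opaque predicate of the sort obfuscators insert; it is our own illustration, not code from the evaluated programs.

    // An opaque predicate: (x * x) % 4 never equals 3 for any integer x, so the
    // "bogus" branch is dead. Standard LLVM passes (instcombine, SCCP,
    // simplifycfg) may or may not fold such conditions depending on how they
    // are obscured; predicates routed through memory or external calls
    // typically survive, which is why a dedicated tool produces cleaner output.
    int dispatch(int x) {
        if ((x * x) % 4 == 3) {      // always false: squares are 0 or 1 mod 4
            return x * 31337;        // bogus branch inserted by the obfuscator
        }
        return x + 1;                // the real computation
    }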
UniTESK (UNIfied TEsting and Specification toolKit) is a testing technology based on formal models (or specifications) of requirements for the behavior of software or hardware components. It was created using the experience gained during the development of a framework for automated testing of a real-time operating system kernel in 1994-2000. Software contracts were the main kind of models in the first version of the technology. Automata models were also implemented to support the generation of complicated test sequences. UniTESK was first used in 2000 in the development of a test suite for an IPv6 implementation. It was the first experience of using contract specifications in testing implementations of telecommunication protocols. This project demonstrated that contract specifications, in combination with the technique for testing software systems with asynchronous interfaces developed within UniTESK, are very effective. Applications of UniTESK to testing software components in Java include a project on testing an implementation of the standard Java runtime library and a project on testing the infrastructure of an information system for one of the large mobile operators in Russia. The most significant application of UniTESK took place in 2005-2007 in the Open Linux Verification project, during which formal specifications and tests for the interfaces of the Linux Standard Base were created. These results were later used in the development of a test suite for a Russian avionics real-time operating system. Positive lessons learned from developing and using UniTESK include the effective application of model-based testing methods in large industrial projects, the high level of test coverage achieved by the generated tests, and the applicability of model-based testing to critical systems and applications. Negative lessons include the lack of well-defined and detailed models and specifications, the extra development expenses caused by creating test-specific models, and the problems with introducing model-based testing technologies that rely on bilingual test generation systems and specific notations.
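To make the notion of a software contract concrete, the sketch below shows a pre/postcondition-style test oracle for a simple stack operation. It illustrates the general idea of contract specifications only and does not use the UniTESK notation.

    #include <cassert>
    #include <vector>

    // A contract-style wrapper for a stack "pop" operation: the precondition
    // states when the operation may be called, the postconditions check the
    // observed behavior against the model state. In model-based testing such
    // checks serve as the test oracle, so the generator only has to enumerate
    // call sequences. This is an illustration of the idea, not UniTESK syntax.
    int checked_pop(std::vector<int>& stack) {
        assert(!stack.empty());                 // precondition: non-empty stack
        const auto old_size = stack.size();
        const int expected_top = stack.back();

        int result = stack.back();              // the implementation under test
        stack.pop_back();                       // (trivial in this sketch)

        assert(result == expected_top);         // postcondition: former top returned
        assert(stack.size() == old_size - 1);   // postcondition: size shrinks by one
        return result;
    }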
Modern IT systems are becoming more and more distributed, both in a literal sense, as business logic spreads across applications running in several different network domains, and in a metaphorical sense, as expert knowledge about the system's inner workings spreads across different outsourcing companies and hundreds of documents. In this paper we discuss regression testing as one of the issues related to that trend. When it comes to testing, the quality of specifications is a major issue: they are either unmanageably large and outdated, or highly fragmented, each specifying its own module but lacking a perspective on how the system works as a whole. In our practice it often comes down to reverse engineering to discover what exactly the different parts of the system expect from each other, but such ad hoc techniques are not welcome in the testing domain. A different approach relies on analyzing the system's event trace, provided either by business activity monitoring tools, transaction logs, or simply log files. Many data mining techniques have already been developed; we are mostly interested in process mining as a way of discovering the system's decision model. A key factor of any process mining application is the algorithm for discovering the relations between events. It can be either a generic algorithm, such as the Alpha algorithm, or a specific algorithm tailored to a particular set of possible events. In this paper we provide an outline of a specific algorithm that we use to discover event relations in a special environment. The environment is designed specifically for this goal and plays a central part in the process: it mediates the interactions between the system's modules at run time as a discrete process, thus providing a very straightforward way of determining event causality.
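For context on what discovering an event relation involves, the sketch below computes the direct-succession and causality relations used by the generic Alpha algorithm from a set of traces. The trace encoding and the example log are our own assumptions; this is not the specific algorithm proposed in the paper.

    #include <cstddef>
    #include <iostream>
    #include <set>
    #include <string>
    #include <utility>
    #include <vector>

    using Trace = std::vector<std::string>;
    using Pair  = std::pair<std::string, std::string>;

    // Direct succession: a > b holds if b immediately follows a in some trace.
    std::set<Pair> direct_succession(const std::vector<Trace>& log) {
        std::set<Pair> succ;
        for (const Trace& t : log)
            for (std::size_t i = 0; i + 1 < t.size(); ++i)
                succ.insert({t[i], t[i + 1]});
        return succ;
    }

    // Causality, as in the Alpha algorithm: a -> b holds if a > b and not b > a.
    std::set<Pair> causality(const std::set<Pair>& succ) {
        std::set<Pair> causal;
        for (const Pair& p : succ)
            if (!succ.count({p.second, p.first}))
                causal.insert(p);
        return causal;
    }

    int main() {
        // Two illustrative traces of the same hypothetical process.
        std::vector<Trace> log = {{"receive", "check", "approve", "archive"},
                                  {"receive", "check", "reject",  "archive"}};
        for (const Pair& p : causality(direct_succession(log)))
            std::cout << p.first << " -> " << p.second << '\n';
        return 0;
    }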
This paper reviews recent methods for indexing multidimensional data and their use for modeling large-scale dynamic scenes. The problem arises in many application domains, such as computer graphics systems, virtual and augmented reality systems, CAD systems, robotics, geographical information systems, and project management systems. Two fundamental approaches to indexing multidimensional data, namely object aggregation and spatial decomposition, are outlined. In the context of the former approach, balanced search trees, also referred to as object pyramids, are discussed. In particular, generalizations of B-trees such as R-trees, R*-trees, and R+-trees are considered with respect to indexing large data sets located in external memory and accessed page by page. The analysis shows that object aggregation methods are well suited for static scenes but of limited use for dynamic environments. The latter approach assumes recursive decomposition of the scene volume in accordance with cutting rules. Space decomposition methods include octrees, A-trees, bintrees, K-D-trees, X-Y-trees, treemaps, and puzzle-trees. They offer a more reasonable compromise between the performance of spatial queries and the expense required to update the indexing structures and keep them in a consistent state under permanent changes in the scenes. Compared to object pyramids, these methods look more promising for dramatically changing environments. It is concluded that regular dynamic octrees are the most effective for the considered applications of modeling large-scale dynamic scenes.
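As a reminder of how regular spatial decomposition works, the sketch below builds a point octree by recursively splitting the scene volume into eight equal cells. The capacity threshold and depth limit are illustrative parameters, not values from the review.

    #include <array>
    #include <cstddef>
    #include <memory>
    #include <vector>

    struct Point { double x, y, z; };

    // Axis-aligned cubic cell of the scene volume (center plus half-extent).
    struct Cell { Point center; double half; };

    // A regular octree node: a leaf stores points until it exceeds a capacity
    // threshold, then splits into eight equal children.
    struct OctreeNode {
        static constexpr std::size_t kCapacity = 8;   // illustrative threshold
        static constexpr int kMaxDepth = 10;          // illustrative depth limit

        Cell cell;
        std::vector<Point> points;                        // used while a leaf
        std::array<std::unique_ptr<OctreeNode>, 8> kids;  // empty while a leaf
        bool leaf = true;

        explicit OctreeNode(Cell c) : cell(c) {}

        // Index of the child octant containing p (one bit per axis).
        int octant(const Point& p) const {
            return ((p.x > cell.center.x) ? 1 : 0) |
                   ((p.y > cell.center.y) ? 2 : 0) |
                   ((p.z > cell.center.z) ? 4 : 0);
        }

        void insert(const Point& p, int depth = 0) {
            if (leaf) {
                points.push_back(p);
                if (points.size() <= kCapacity || depth >= kMaxDepth) return;
                // Split: create the eight children and push the points down.
                double h = cell.half / 2;
                for (int i = 0; i < 8; ++i) {
                    Point c = { cell.center.x + ((i & 1) ? h : -h),
                                cell.center.y + ((i & 2) ? h : -h),
                                cell.center.z + ((i & 4) ? h : -h) };
                    kids[i] = std::make_unique<OctreeNode>(Cell{c, h});
                }
                leaf = false;
                for (const Point& q : points) kids[octant(q)]->insert(q, depth + 1);
                points.clear();
                return;
            }
            kids[octant(p)]->insert(p, depth + 1);
        }
    };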
The paper analyses the behavior of the SQL standard's table expressions in comparison with their relational equivalents. The concept of a relation in the relational model presumes the absence of duplicate tuples, identically named attributes, attribute ordering, tuple ordering, and untyped NULLs. The paper demonstrates cases where the behavior of SQL-standard constructions violates the rules of the relational model: these constructions produce tables that are not relations (join expressions, table references, set-theoretic operators, simple tables, and SELECT). The paper also describes practices that make it possible to avoid such cases and to use SQL in a truly relational manner. Relational use of SQL is possible nearly always. However, existing implementations of SQL are far from ideal, and sometimes such use leads to poor performance. Non-relational use of SQL means a trade-off between performance and conceptual consistency. It is always useful to understand the theoretically perfect variant of use and to have solid reasons for abandoning it. It is also useful to document these reasons so that one can return to the perfect variant with a new version of an SQL-based DBMS that does not exhibit the performance problems.
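To make the duplicate-tuple issue concrete, consider a worked example of ours over a hypothetical Employee relation with attributes name and city. Relational projection is a set-valued operation,

    \pi_{\mathrm{city}}(\mathrm{Employee}) = \{\, t.\mathrm{city} \mid t \in \mathrm{Employee} \,\},

so each city appears at most once in the result, whereas the SQL query SELECT city FROM Employee evaluates to a bag in which a city appears once per matching row; only SELECT DISTINCT city FROM Employee restores the relational behavior.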