The security development lifecycle (SDL) is applied to improve software reliability and security. It extends the program lifecycle with additional testing of security properties. Among other techniques, fuzz testing is widely used, allowing one to detect crashes and hangs in the analyzed code. A hybrid approach that combines fuzzing with dynamic symbolic execution has shown even greater efficiency than classical fuzzing. Moreover, symbolic execution makes it possible to add extra runtime checks, called security predicates, that detect memory errors and undefined behavior. This article explores the properties of the path predicate slicing algorithm, which eliminates redundant constraints from a path predicate without loss of accuracy. The article proves that the algorithm is finite and does not lose solutions; moreover, its asymptotic complexity is estimated.
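The general idea behind such slicing can be conveyed with a minimal sketch (a hypothetical simplification for illustration, not the article's actual algorithm): constraints are grouped by the variables they share, and only those transitively connected to the variables of interest are kept, since independent constraints cannot restrict the solutions projected onto those variables.

```python
def slice_predicate(constraints, target_vars):
    """Keep only constraints transitively sharing variables with target_vars.

    constraints: list of frozensets, each holding the variable names that one
    constraint mentions. Constraints independent of the target variables
    cannot restrict them, so dropping them loses no solutions.
    """
    relevant = set(target_vars)
    kept = []
    changed = True
    while changed:
        changed = False
        for c in constraints:
            if c not in kept and c & relevant:
                kept.append(c)      # constraint touches a relevant variable
                relevant |= c       # its other variables become relevant too
                changed = True
    return kept

# Constraints over {x, y} and {y, z} are chained to x; {a, b} is independent.
path_predicate = [frozenset({"x", "y"}), frozenset({"y", "z"}),
                  frozenset({"a", "b"})]
sliced = slice_predicate(path_predicate, {"x"})
```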
The paper presents an overview of a type system that supports the convergence of the procedural, object-oriented, functional, and concurrent programming paradigms, relying on static type checking with smart type inference and the ability to ensure dynamic type safety as well. The key feature of the type system is that it is built entirely on just two basic constants, from which all other constructions are derived.
A .NET developer occasionally needs to compare compiled programs or assemblies, e.g., when updating versions of third-party libraries or when working with their own binary files. However, the existing tools have significant drawbacks; for example, they do not support comparison of .NET Core assemblies. In this paper, we reviewed the different types of .NET assemblies and, taking their structure into account, developed and integrated into the Rider IDE our own Assembly Diff tool, which addresses the disadvantages of the existing tools and expands the comparison possibilities. We presented several variants of the tool's presentation and implementation and chose the most functional one, a comparison tree, for which we developed and described special algorithms that take into account the semantic features of .NET types.
Geometry simplification for the radiosity method is a laborious process that is difficult to automate in the general case. As an alternative solution to this problem, this paper proposes a modification of the radiosity method using an approximation called “virtual patches”. Virtual patches are elements of geometry obtained by clustering some points of the original geometry for which the illumination is calculated. They have a normal, a color, and an area, but no geometric representation of their own, instead representing a cloud of points inside a voxel. Compared with the original radiosity method, the proposed method increases the accuracy of global illumination without reducing the performance of its calculation.
Convolutional Neural Networks (CNN) show high accuracy in solving pattern recognition problems but have high computational complexity, which leads to slow data processing. To increase the speed of CNNs, we propose a hardware implementation method with calculations in the residue number system with moduli of a special type. The proposed method was simulated in hardware on a Field-Programmable Gate Array for the LeNet-5 CNN trained on the MNIST, FMNIST, and CIFAR-10 image databases. The simulation has shown that the proposed approach can increase the clock frequency and performance of the device by 11–12% compared with the traditional approach based on the positional number system.
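As background on why special-type RNS moduli are attractive for hardware, the sketch below illustrates channel-wise (carry-free between channels) addition and Chinese Remainder Theorem reconstruction. The moduli set {2^n − 1, 2^n, 2^n + 1} is an assumption for illustration and may differ from the set used in the paper:

```python
from math import prod

def to_rns(x, moduli):
    # Forward conversion: represent x by its residues modulo each channel
    return tuple(x % m for m in moduli)

def rns_add(a, b, moduli):
    # Addition is performed independently in each channel, with no carries
    # propagating between channels; this is what enables parallel hardware.
    return tuple((ra + rb) % m for ra, rb, m in zip(a, b, moduli))

def from_rns(residues, moduli):
    # Chinese Remainder Theorem reconstruction (moduli pairwise coprime)
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(.., -1, m) = modular inverse
    return x % M

n = 4
moduli = (2**n - 1, 2**n, 2**n + 1)  # 15, 16, 17: pairwise coprime
a, b = 1234, 2718
s = rns_add(to_rns(a, moduli), to_rns(b, moduli), moduli)
```

Residues modulo 2^n and 2^n − 1 are particularly cheap in hardware: reduction amounts to bit truncation and end-around-carry addition, respectively.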
This article presents the design of a modified error detection and localization algorithm in the Residue Number System (RNS). A classical redundant RNS with one control modulus can detect a single error but not localize it; two control moduli are needed to localize a single error. The presented algorithm achieves error correction with a single control modulus transmitted over a reliable communication channel. The proposed approach was verified in Verilog on an ASIC using the RTL and physical synthesis tool Cadence Genus Synthesis Solution. It significantly reduces the area of the hardware implementation, increasing packing density and enabling more efficient use of silicon resources, while slightly increasing the running time compared with the classical algorithm. A distributed data storage system was developed to study the efficiency of the proposed algorithm.
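The detection principle of a redundant RNS can be sketched as follows (a toy illustration with assumed moduli, not the paper's ASIC design): with one control modulus, legitimate values lie in the range given by the product of the information moduli, and a single corrupted residue pushes the reconstructed value outside that range:

```python
from math import prod

info_moduli = (7, 9, 11)       # information moduli (pairwise coprime), assumed
control = 13                   # single control modulus, assumed
moduli = info_moduli + (control,)
M = prod(info_moduli)          # legitimate dynamic range is [0, M)

def to_rns(x):
    return [x % m for m in moduli]

def from_rns(res):
    # Chinese Remainder Theorem reconstruction over all moduli
    MM = prod(moduli)
    x = 0
    for r, m in zip(res, moduli):
        Mi = MM // m
        x += r * Mi * pow(Mi, -1, m)
    return x % MM

def has_error(res):
    # A corrupted residue moves the value outside the legitimate range [0, M)
    return from_rns(res) >= M

x = 500
res = to_rns(x)
res[1] = (res[1] + 3) % moduli[1]  # inject a single-digit error
```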
Learning customization and monitoring are considered key aspects of teaching-learning processes. Several works have proposed mobile learning systems that provide teachers and students with learning monitoring and personalization services. One of the main software-quality requirements for such systems is usability; however, few works have addressed usability issues through laboratory studies with users in real domains. In this work, we present a usability evaluation of the learning monitoring and personalization services of a mobile learning platform, based on a laboratory study in which nine teachers and ten students participated. The aspects evaluated were effectiveness, efficiency, and level of user satisfaction, as proposed by the ISO/IEC 25000 family of standards. The results show that the teachers achieved satisfactory effectiveness, efficiency, and satisfaction, while the students achieved satisfactory effectiveness and satisfaction and acceptable efficiency. The usability evaluation described in this work can serve as a reference for developers seeking to improve the development of learning monitoring and personalization services.
Technologies that support co-located collaboration must not only provide a shared workspace but also support collaboration itself. In an observational study, several collaboration problems were identified in groups of people working with a system with a Tangible User Interface. Some of these problems could be identified and prevented with the support of a Coaching System, which encourages interactions between group members through Social Interventions. To develop a Coaching System, it is necessary to know the cohesion among the members of the group in order to decide on the appropriate Social Interventions. In this paper, a model is proposed to represent the social interactions that occur in a group of people performing a task. These interactions can be analyzed to determine the degree of cohesiveness of a group and to support collaboration.
This paper reviews the literature on automatic code generation for user-centered serious games. We decided to break the study into two parts: one on serious games with model-driven engineering, and another on user-centered serious games. This paper extends a paper presented at CONISOFT 20, which presented a systematic review covering only the five years prior to writing. The systematic literature review conducted in this paper covers a decade of publications, from January 2012 to June 2022. The main objective is to survey the literature that helps mitigate the costs and time of software development for serious games. The overall conclusion is that there is still work to be done to combine user-centered serious games and automatic generation. This paper is a systematic review that identifies relevant publications and provides an overview of research areas and publication venues. In addition, research perspectives were classified according to common objectives, techniques, and approaches. Finally, we point out challenges and opportunities for future research and development.
Brain-Computer Interfaces (BCI) allow users to communicate with a software system through cognitive functions measurable by brain signals, recorded as Electroencephalography (EEG). User tests have been the most widely used method for usability evaluation of BCI software applications. In user tests, the data collected comes from the opinions of users gathered through questionnaires. These tests require a lot of time, since they include not only performing the interaction tasks and filling in the questionnaires, but also placing and calibrating the EEG device. All this makes the evaluation process a very heavy task for the participants and can mean that the data collected is not entirely reliable. That is why we are interested in including EEG signals themselves in the usability evaluation process of BCI software applications. Therefore, in this paper we present the results of a state-of-the-art analysis aimed at identifying the relevant works in the area and future lines of research.
The paper discusses the execution of a program of tasks on the SDN data plane, modeled by a finite connected undirected graph of physical connections; execution is understood in the sense of the object-oriented programming paradigm as consisting of objects and the messages that objects can exchange. Objects are implemented in hosts: several different objects can be implemented in one host, and the same object can be implemented in several hosts. Messages between objects implemented in different hosts are transmitted in packets, which are routed by switches based on identifiers assigned to packets, that is, on sets of values of some packet parameters in the packet header. Two problems are tackled in this work: 1) minimizing the number of identifiers, and 2) setting up switches to implement the paths that packets should take. These problems are solved in two cases: A) a packet intended for some object must arrive at exactly one host in which this object is implemented; B) a packet may arrive at several hosts, but the desired object must be implemented in one and only one of them. It is shown that problem 1 in case A is equivalent to the set covering problem, and the minimum number of identifiers in the worst case is min{n, m}, where n is the number of objects and m is the number of hosts implementing objects. In case B, the problem is a special modification of the set covering problem; the hypothesis is proposed that the minimum number of identifiers in the worst case is min{⌊log₂(n + 1)⌋, m}. So far, only an upper bound of O(ln(min{n, m}) × log₂(n + 1)) has been obtained. To solve problem 2 in cases A and B, algorithms for setting up the switches are proposed, with complexity O(m) and O(km), respectively, where m is the number of edges of the graph of physical connections and k is the number of required packet identifiers.
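The set covering structure behind problem 1, case A can be illustrated with the classic greedy approximation (hypothetical toy data; the paper's own algorithms are not reproduced here): hosts play the role of subsets of objects, every object must be covered, and the size of the cover bounds the number of identifiers needed:

```python
def greedy_set_cover(universe, subsets):
    # Classic greedy approximation for set cover: repeatedly pick the subset
    # covering the most still-uncovered elements (ln-factor approximation).
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & subsets[s]))
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Hypothetical instance: hosts[i] is the set of objects implemented in host i;
# every object must be reachable through some chosen host.
hosts = {"h1": {"o1", "o2"}, "h2": {"o2", "o3"}, "h3": {"o3", "o4"}}
objects = {"o1", "o2", "o3", "o4"}
cover = greedy_set_cover(objects, hosts)
```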
The development of cloud computing, including the storage and processing of confidential user data on servers that can be attacked, puts forward new requirements for information protection. The article explores the problem of a client retrieving information from a database in such a way that no one except the client learns anything about which information the client is interested in (Private Information Retrieval, PIR). The problem was introduced in 1995 by Chor, Goldreich, Kushilevitz, and Sudan in the information-theoretic setting. A model of cloud computing is proposed that includes a cloud, a user, clients, a trusted dealer, and a passive adversary in the cloud; in addition, the attacking side is able to create fake clients that generate an unlimited number of requests. An algorithm for organizing and distributing the database on the cloud and an algorithm for obtaining the required bit are proposed. The communication complexity of the algorithm is estimated, as is the probability of revealing the required bit's index in the case when fake clients perform unlimited requests.
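The flavor of information-theoretic PIR can be conveyed by the classic two-server XOR scheme of Chor et al. (shown here as background; this is not the article's cloud algorithm): the client sends a uniformly random index set to one server and the same set with the target index toggled to the other, so each server alone sees a uniformly random query, yet the XOR of the two answers yields the target bit:

```python
import secrets

def xor_bits(db, indices):
    # Server side: XOR of the database bits at the queried indices
    acc = 0
    for j in indices:
        acc ^= db[j]
    return acc

def pir_query(db, i):
    # Client side: neither server alone learns anything about i,
    # because each query set is uniformly random on its own.
    n = len(db)
    S = {j for j in range(n) if secrets.randbits(1)}
    S2 = S ^ {i}               # symmetric difference toggles index i
    a1 = xor_bits(db, S)       # answer of server 1
    a2 = xor_bits(db, S2)      # answer of server 2
    return a1 ^ a2             # all bits except db[i] cancel out

db = [1, 0, 1, 1, 0, 0, 1, 0]
```

The scheme transmits O(n) bits per query; the sublinear-communication protocols that followed the 1995 paper refine this basic cancellation idea.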
ISSN 2220-6426 (Online)