Modern realistic computer graphics are based on light transport simulation. One of the main and most computationally demanding tasks here is the calculation of global illumination, i.e. the distribution of light in a virtual scene, taking into account multiple reflections and scattering of light and all kinds of its interaction with objects in the scene. Hundreds of publications describing dozens of methods are devoted to this problem. In this state-of-the-art review, we would like not only to list and briefly describe these methods, but also to give a “map” of existing works that allows the reader to navigate them, understand their advantages and disadvantages, and thereby choose the right method for themselves. Particular attention is paid to such characteristics of the methods as robustness and universality with respect to the mathematical models used, the transparency of method verification, the possibility of efficient implementation on the GPU, and the restrictions imposed on the scene or on illumination phenomena. In contrast to existing survey papers, we analyze not only the efficiency of the methods but also their limitations and the complexity of their software implementation. In addition, we provide the results of our own numerical experiments with various methods, which serve as illustrations for the conclusions.
An acceptable level of data quality is nowadays paramount for any organization or enterprise that wishes its business processes to prosper. Thus, introducing activities focused on data quality management is a crucial requirement for analysts if the level of data quality for the functionality or service at hand is to be ensured. Such specialized data quality management activities should be introduced as early as possible in the software development process. A search of existing proposals in this field shows that there is still a lack of methodological or technological proposals with which a developer can design data-quality-aware applications in the specific field of Web application development. Considering the benefits offered by Model Driven Web Engineering, this work presents a partial outcome of our research in this novel field: a metamodel and a UML profile, both usable as data quality artefacts during the design stage of Web applications. The main objective is to provide designers with the tools needed to design Web applications in a way that prevents data quality issues.
This paper presents the automation of a Web advertising recognition algorithm based on regular expressions. Currently, the use of regular expressions, optical character recognition, databases, and automated tests has become critical for many software implementations. The tests were carried out in three Web browsers. As a result, we achieved the detection of advertisements in Spanish that distract the user's attention and, above all, extract information from users. The main feature of the algorithm is that its automatic and versatile execution does not require access to the source code of the page in question, and that in the future it can become an application running in the background. In addition, the support of optical character recognition gives acceptable effectiveness in detecting advertising.
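As an illustration of the regular-expression part of such a pipeline, the following minimal Python sketch matches common Spanish advertising phrases in text extracted by OCR; the patterns, function names, and sample text are illustrative assumptions, not the rules used in the paper.

# Hypothetical sketch: flagging Spanish-language ad phrases in OCR-extracted text.
# The patterns below are illustrative assumptions, not the paper's actual rules.
import re

AD_PATTERNS = [
    re.compile(r"\bpublicidad\b", re.IGNORECASE),
    re.compile(r"\bhaz\s+clic\s+aqu[ií]\b", re.IGNORECASE),
    re.compile(r"\bgana\s+dinero\b", re.IGNORECASE),
    re.compile(r"\boferta\s+exclusiva\b", re.IGNORECASE),
]

def looks_like_ad(ocr_text: str) -> bool:
    """Return True if any advertising pattern matches the OCR-extracted text."""
    return any(p.search(ocr_text) for p in AD_PATTERNS)

if __name__ == "__main__":
    sample = "OFERTA EXCLUSIVA: haz clic aquí y gana dinero"
    print(looks_like_ad(sample))  # True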
Currently, in Scrum, there are different methods to estimate user stories in terms of effort or complexity. Most of the existing techniques consider factors at a fine-grained level, and these techniques are not always accurate. Although Planning Poker is the most widely used method in Scrum to estimate user stories, it is primarily effective in experienced teams, since the estimation mostly depends on the observation of experts, and it is difficult to apply in inexperienced teams. In this paper, we present a proposal for complexity decomposition at a coarse-grained level, in order to consider important factors for complexity estimation. We use a Bayesian network to represent those factors and their relations. The nodes of the network represent the factors, and the edges are weighted with the judgment of professional practitioners about the importance of the factors. During the estimation phase, the Scrum team members introduce the values for each factor; in this way, the network generates a value for the complexity of a user story, which is transformed into a Planning Poker card number representing the story points. The purpose of this research is to provide development teams without experience or historical data with a method to estimate the complexity of user stories through a model focused on the human aspects of developers.
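A minimal sketch of the mapping idea follows, under the simplifying assumption that practitioner-weighted factors are aggregated with a weighted sum rather than full Bayesian network inference; the factor names, weights, and card scale are illustrative, not the paper's values.

# Simplified sketch (assumption): weighted aggregation of factor values in place of
# Bayesian network inference, then mapping to the nearest Planning Poker card.
POKER_CARDS = [1, 2, 3, 5, 8, 13, 21]

FACTOR_WEIGHTS = {                     # practitioner-judged importance (hypothetical)
    "technical_difficulty": 0.4,
    "domain_knowledge": 0.3,
    "dependencies": 0.2,
    "uncertainty": 0.1,
}

def estimate_story_points(factor_values: dict) -> int:
    """Map factor values in [0, 1] to the closest Planning Poker card."""
    score = sum(FACTOR_WEIGHTS[f] * factor_values[f] for f in FACTOR_WEIGHTS)
    target = score * max(POKER_CARDS)          # scale the score to the card range
    return min(POKER_CARDS, key=lambda c: abs(c - target))

print(estimate_story_points({"technical_difficulty": 0.8, "domain_knowledge": 0.5,
                             "dependencies": 0.3, "uncertainty": 0.6}))  # -> 13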
Type 2 Diabetes (T2DM) makes up about 90% of diabetes cases, and strict requirements for continuous monitoring and detection have become one of the key aspects of T2DM. This research aims to develop an ensemble of several machine learning and deep learning models for early detection of T2DM with high accuracy. Thanks to the high diversity of models, the ensemble provides better performance than single models. Methodology: The proposed system is a modified, enhanced ensemble of machine learning models for T2DM prediction. It combines Logistic Regression, Random Forest, SVM, and Deep Neural Network models into a modified ensemble model. Results: The output of each model in the modified ensemble is used to determine the final output of the system. The datasets used for these models include Practice Fusion EHR, the Pima Indians Diabetes data, the UCI AIM94 dataset, and CA Diabetes Prevalence 2014. In comparison to previous solutions, the proposed ensemble model improves accuracy, sensitivity, and specificity: it provides an accuracy of 87.5% versus an average of 83.51%, a sensitivity of 35.8% versus 29.59%, and a specificity of 98.9% versus 96.27%. The processing time of the proposed solution, 96.6 ms, is also faster than the 97.5 ms of the state of the art. Conclusion: The proposed modified, enhanced system improves the overall prediction capability for T2DM using an ensemble of several machine learning and deep learning models. A majority voting scheme uses the outputs of the individual models to make the final prediction. The regularization function is modified to include the regularization of all models in the ensemble, which helps prevent overfitting and improves the generalization capacity of the proposed system.
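For illustration, the following scikit-learn sketch builds a hard-voting (majority) ensemble in the spirit described above, with an MLP standing in for the deep neural network and synthetic data standing in for the diabetes datasets; it is not the authors' implementation and omits the modified regularization.

# Illustrative majority-voting ensemble; synthetic data stands in for Pima-style data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in for a diabetes dataset (8 features, as in Pima Indians).
X, y = make_classification(n_samples=768, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(kernel="rbf")),
        ("dnn", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)),
    ],
    voting="hard",  # majority voting over the individual model predictions
)
ensemble.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))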
Mental disorders like depression represent 28% of global disability; depression alone accounts for around 7.5% of it. Depression is a common disorder that affects the state of mind, daily activities, and emotions, and produces sleep disorders. It is estimated that approximately 50% of depressive patients suffer from sleep disturbances. In this paper, a data mining process to classify depressive and non-depressive episodes during nighttime is carried out based on a formal data mining methodology called Knowledge Discovery in Databases (KDD). KDD guides the data mining process through well-established stages: Pre-KDD, Selection, Pre-processing, Transformation, Data Mining, Evaluation, and Post-KDD. The dataset used for the classification is the DEPRESJON dataset, which contains the motor activity of 23 unipolar and bipolar depressed patients and 32 healthy controls. The classification is carried out with two different approaches, a multivariate and a univariate analysis, to classify depressive and non-depressive episodes. For the multivariate analysis, the Random Forest algorithm is applied to a model built from 8 features; the classification results are a specificity of 0.9927 and a sensitivity of 0.9991. The univariate analysis shows that the maximum of the activity is the most descriptive characteristic of the model, with an accuracy of 0.908 for the classification of depressive episodes.
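As an illustration of the multivariate approach, the sketch below extracts simple per-night statistics from motor-activity signals and trains a Random Forest; the synthetic data, feature set, and window size are assumptions rather than the paper's 8-feature model.

# Illustrative sketch: summary statistics per night, classified with a Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def night_features(activity: np.ndarray) -> list:
    """Summary statistics of one night's motor-activity samples (assumed features)."""
    return [activity.mean(), activity.max(), activity.std(),
            np.median(activity), (activity == 0).mean()]

rng = np.random.default_rng(0)
# Synthetic stand-in for DEPRESJON nights: 200 nights x 480 one-minute samples.
nights = rng.poisson(lam=5, size=(200, 480)).astype(float)
labels = rng.integers(0, 2, size=200)            # 1 = depressive episode (synthetic)

X = np.array([night_features(n) for n in nights])
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))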
This article presents demand response techniques for the participation of datacenters in smart electricity markets under the smart grid paradigm. The proposed approach includes a datacenter model based on empirical information to determine the power consumption of CPU-intensive and memory-intensive tasks. A negotiation approach between the datacenter and its clients and a heuristic planning method for energy reduction optimization are proposed. The experimental evaluation is performed over realistic problem instances modeling different types of clients. Results indicate that the proposed approach is effective in providing appropriate demand response actions according to monetary incentives.
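To make the planning idea concrete, here is a hypothetical greedy sketch that selects which tasks to defer so that a requested power reduction is met at low monetary penalty; the task data, incentive model, and greedy rule are assumptions, not the paper's heuristic.

# Hypothetical greedy demand-response step: defer tasks with the best
# power-to-penalty ratio until the requested reduction target is reached.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    power_kw: float      # consumption if run now (CPU- or memory-intensive profile)
    penalty: float       # monetary cost of deferring it (negotiated with the client)

def plan_reduction(tasks, target_kw):
    """Select tasks to defer so the power reduction reaches target_kw cheaply."""
    deferred, saved = [], 0.0
    for t in sorted(tasks, key=lambda t: t.penalty / t.power_kw):
        if saved >= target_kw:
            break
        deferred.append(t.name)
        saved += t.power_kw
    return deferred, saved

tasks = [Task("batch_render", 12.0, 3.0), Task("db_reindex", 8.0, 5.0),
         Task("ml_training", 20.0, 4.0)]
print(plan_reduction(tasks, target_kw=25.0))   # (['ml_training', 'batch_render'], 32.0)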
Advanced computing brings opportunities for innovation in a broad range of applications. Traditional practices based on visual and manual methods tend to be replaced by cyber-physical systems that automate processes. The present work introduces an example of this: a machine vision system based on deep learning to classify bridge load, supporting an optical scanning system for structural health monitoring tasks. The optical scanning system monitors the health of structures, such as buildings, warehouses, and water dams, by measuring their coordinates to identify whether a coordinate displacement occurs that could indicate an anomaly in the structure related to structural damage. Using this optical scanning system to monitor the structural health of bridges is somewhat more complicated, because vehicle transit over the bridge causes a vehicle-bridge interaction that manifests as bridge oscillation. Under this scheme, the bridge oscillation corresponds to a displacement of the bridge coordinates due to the vehicle-bridge interaction, but not necessarily due to bridge damage. Therefore, a bridge load classifier is required to correlate the behavior of the bridge coordinate measurements with the oscillation caused by the vehicle-bridge interaction, in order to discriminate normal from abnormal structural behavior, identify tendencies that could indicate bridge deformation, or discover whether the bridge behavior under load is changing over time.
This paper proposes a first approach based on wavelet analysis within image processing for the detection of objects with a repetitive pattern and for binary classification in the image plane, in particular for navigation in simulated environments. To date, it has become common to use algorithms based on convolutional neural networks (CNNs) to process, in the spatial domain, images obtained from the on-board camera of unmanned aerial vehicles (UAVs), which is useful in detection and classification tasks. A CNN architecture can receive images without pre-processing as input in the training stage; this advantage allows the characteristic features of the image to be extracted. Nevertheless, in this work we argue that characteristics at different frequencies, low and high, also affect the performance of the CNN during training. Thus, we propose a CNN architecture complemented by the 2D discrete wavelet transform as a feature extraction method. This information improves the learning capacity, eliminates overfitting, and achieves better efficiency in the detection of a target.
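A small sketch of the preprocessing idea follows: each input frame is decomposed with a 2D discrete wavelet transform and the subbands are stacked as extra input channels for a CNN. The wavelet choice ('haar') and the stacking scheme are assumptions, not necessarily the configuration used in the paper.

# Illustrative 2D DWT feature extraction step preceding a CNN.
import numpy as np
import pywt

def dwt_channels(gray_image: np.ndarray) -> np.ndarray:
    """Return a (4, H/2, W/2) tensor of approximation and detail subbands."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_image, "haar")
    return np.stack([cA, cH, cV, cD], axis=0)

frame = np.random.rand(128, 128)          # stand-in for a UAV camera frame
channels = dwt_channels(frame)
print(channels.shape)                     # (4, 64, 64), ready to feed a CNN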
This article presents a flow-based mixed integer programming formulation for the Quality of Service Multicast Tree problem. This is a relevant problem in modern telecommunication networks that distribute multimedia content over cloud-based Internet systems. To the best of our knowledge, no previous mixed integer programming formulation has been proposed for the Quality of Service Multicast Tree problem. An experimental evaluation is performed over a set of realistic problem instances from SteinLib to show that standard exact solvers can find solutions to instances of real-world size. An exact method is applied to benchmark the proposed formulation, finding optimal solutions and low feasible-to-optimal gaps in reasonable execution times.
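To illustrate what a flow-based formulation of this kind looks like, the following sketch models a toy instance with PuLP: binary edge variables define the tree, per-terminal unit flows enforce connectivity, and a delay bound plays the role of the QoS constraint. The graph, costs, delays, and bound are assumptions, not SteinLib data or the paper's exact formulation.

# Toy flow-based MIP for a QoS multicast tree (illustrative, not the paper's model).
import pulp

nodes = ["s", "a", "b", "t1", "t2"]
edges = {("s", "a"): (1, 2), ("s", "b"): (2, 1), ("a", "t1"): (1, 2),
         ("b", "t1"): (3, 1), ("a", "t2"): (4, 1), ("b", "t2"): (1, 3)}  # (cost, delay)
terminals, source, max_delay = ["t1", "t2"], "s", 5

prob = pulp.LpProblem("qos_multicast_tree", pulp.LpMinimize)
x = {e: pulp.LpVariable(f"x_{e[0]}_{e[1]}", cat="Binary") for e in edges}   # edge in tree
f = {(e, t): pulp.LpVariable(f"f_{e[0]}_{e[1]}_{t}", cat="Binary")          # unit flow to t
     for e in edges for t in terminals}

prob += pulp.lpSum(c * x[e] for e, (c, d) in edges.items())                 # total edge cost
for t in terminals:
    for v in nodes:
        inflow = pulp.lpSum(f[(e, t)] for e in edges if e[1] == v)
        outflow = pulp.lpSum(f[(e, t)] for e in edges if e[0] == v)
        rhs = 1 if v == source else (-1 if v == t else 0)
        prob += outflow - inflow == rhs                                     # flow conservation
    prob += pulp.lpSum(d * f[(e, t)] for e, (c, d) in edges.items()) <= max_delay  # QoS delay
    for e in edges:
        prob += f[(e, t)] <= x[e]                                           # flow only on tree edges

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([e for e in edges if x[e].value() == 1])                              # selected tree edges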
We present an overview of modern approaches to processing confidential data in the cloud. A significant part of data warehouse and data processing systems is based on cloud services, and users and organizations consume them as services from a provider. This approach allows users to benefit from these technologies: they do not need to purchase, install, and maintain expensive equipment, and they can access the data and the calculation results from any device. However, such data processing on cloud services carries certain risks, because one of the participants of the protocol securing access to the cloud data storage may be an adversary, which leads to the threat of confidential information leakage. The approaches reviewed are intended for databases in which information is stored in encrypted form, and they allow working in the familiar paradigm of SQL queries. Despite its advantages, such an approach has some limitations: it is necessary to choose an encryption method and to maintain a balance between the reliability of the encryption and the set of queries required by users. For the case in which users are not limited to the framework of SQL queries, we propose another way of implementing cloud computing over confidential data using free software. It is based on a lambda architecture combined with certain restrictions on the allowed, deductively safe database queries.
The propositional modal μ-calculus is a well-known specification language for labeled transition systems. In this work, we study an extension of this logic with converse modalities and Presburger arithmetic constraints, interpreted over tree models. We describe a satisfiability algorithm based on a breadth-first construction of Fischer-Ladner models. An implementation together with several experiments is also reported. Furthermore, we describe an application of the algorithm to static analysis problems over semi-structured data.