Today there are smart cities that, through the use of information technologies, sensors, and specialized infrastructure, focus their efforts on improving the quality of life of their inhabitants. From these efforts arises the need to analyze and represent data within a system so that it is useful and understandable to people, a need that dashboards address. The objective of these systems is to provide users with information that supports decision-making, so it is essential to adapt the visualization of the information to their needs and preferences. However, the analysis of adaptability through user interaction, and of its benefits, is a topic still under exploration. This paper analyzes the literature on information visualization in adaptable dashboards for smart cities. Based on the elements of adaptable dashboards identified in the literature review, we propose an adaptable dashboard architecture, identify the main characteristics of the users of a smart city dashboard, and build an adaptable dashboard prototype using user-centered techniques.
People with dementia (PwD) experience deteriorating executive functions, in particular working memory, and therefore find it hard to complete multistep tasks or activities of daily living. During the pandemic, PwD and their caregivers were particularly vulnerable and often isolated, which affected their mental and physical health. Their ability to live independently was hampered, fostering depression in PwD and burnout in informal caregivers. Information technology can support dementia care, improving the quality of life of PwD and easing the burden on caregivers, and there is an increasing demand to support informal caregivers and improve their well-being by making dementia challenges less severe. This study uses qualitative techniques to design a model of technological strategies based on semi-structured interviews with seven informal caregivers from two different countries. Based on these interviews, we developed design insights for implementing solutions that help informal caregivers take care of their PwD at home using conversational agents. We hope that the findings presented in this study will help researchers and developers design solutions that support PwD and informal caregivers.
When the coronavirus COVID-19 swept the world in early 2020, working from home became a necessity. In the software industry, thousands of software developers began working from home; many did so on short notice, under difficult and stressful conditions. Developers' emotions can be affected by such a situation. At the same time, some well-known soft skills have been emphasized as required for working remotely, yet software engineering research lacks theory and methodologies for addressing human aspects in software development. In this paper, we present an exploratory study of developers' wellbeing during the pandemic, expressed as emotions, and of their perceptions of the level at which soft skills are practiced and required when working from home. The results show that a high percentage of respondents reported experiencing positive emotions, although a portion reported negative emotions. As for soft skills, some are reported as being practiced at a high level when working from home, but there is still no consensus.
Modern software development requires agile methods to deploy and scale increasingly demanded distributed systems. Practitioners have adopted the microservices architecture to cope with the challenges posed by modern software demands. However, adopting and deploying this architecture also creates technical and organizational challenges, potentially slowing down development and operations teams, which then require more time and effort to implement a quality deployment process that allows them to constantly release new features to production. The adoption of a DevOps culture, along with its practices and tools, alleviates some of these challenges. In this paper, we propose a guide for deploying systems with a microservices architecture, considering the practices of a DevOps culture and providing practitioners with a base path for implementing the platform this architecture requires. We conducted this work following the Design Science Research Methodology for Information Systems (DSRM): we identified the problem and defined the solution objectives through a Systematic Literature Mapping and a Gray Literature Review, resulting in the proposed guide. This work can be summarized as follows: (I) identification of practices and technologies that support the deployment of microservices; (II) identification of recommendations, challenges, and best practices for the deployment process; (III) modeling of the microservices deployment process using SPEM; (IV) integration of this knowledge into a guide for deploying microservices by adopting DevOps practices.
The software process has been studied from various perspectives; among them, the human factor is one of the most important due to the intrinsically social nature of the discipline. This study aims to explore the benefits of using Belbin's role theory in tasks, both team and individual, related to the software development process, particularly in database (DB) design. This paper presents two controlled experiments with students. The first experiment compared teams formed from compatible roles identified in the students against teams formed through a traditional strategy, during a DB conceptual design task. In the second experiment, individual students were the experimental subjects, and the performance of the Belbin roles identified in them was compared during a DB logical design task. The dependent variables in both experiments were the effort spent on the task and the quality of the generated design. The first experiment did not show significant differences in either variable; a possible limitation was the complexity of the task. The second experiment likewise showed no significant differences in the effort variable; however, in the variable related to the quality of the logical design, the monitor-evaluator role presented significant differences when compared with the other six identified roles. These results are consistent with previous studies identified in the literature. We plan to continue experimenting with other tasks in order to gain a deeper understanding of applying Belbin's theory in the software process and to accumulate experience.
Scrum is one of many agile frameworks and is considered the most popular and widely adopted. Although Scrum presents several advantages, process and final product quality continue to be its main challenges. Quality assessment should be an essential activity in the software development process. Several authors have attempted to improve the quality of Scrum projects by changing some aspects of the framework, such as including new quality practices, a quality role, and quality processes. However, the quantification of quality is still a challenge. For that reason, the authors proposed a framework called Scrumlity, defined in a previous study, which extends Scrum with a quality role and artifacts to evaluate quality throughout the complete execution of a Sprint. In this study, the authors add a User Story Quality assessment to the framework, covering more than 250 analyzed User Stories. The results obtained after this experiment indicate the importance of executing a User Story Quality assessment and show that Scrum Team members are willing to accept adding it to the framework.
Systems Thinking Competencies have become extremely important and widely studied due to increasing systems complexity. Because of this, when they are taught, it is extremely useful to identify whether or not students possess Systems Thinking Competencies in order to design a specific teaching strategy. This research applied an Adapted Holistic Scoring Method to assess Concept Maps developed by postgraduate and undergraduate engineering students in order to identify Systems Thinking Competencies. The research had two phases. In the first, students showed an acceptable knowledge of cost estimation drivers and a certain level of Systems Thinking Competencies. In the second phase, both cost estimation drivers and Systems Thinking Competencies showed an improvement. The Mann-Whitney U-test was applied to identify whether there were significant differences between Phase 1 and Phase 2, considering a confidence level of 95% and a significance level of 0.05.
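To illustrate the kind of comparison applied here, the following minimal Python sketch runs a Mann-Whitney U-test on two hypothetical sets of holistic Concept Map scores; the scores and group sizes are invented for illustration and are not the study's data:

```python
# Illustrative sketch (not the study's analysis): comparing hypothetical
# holistic Concept Map scores from Phase 1 and Phase 2.
from scipy.stats import mannwhitneyu

phase1_scores = [12, 15, 14, 10, 13, 16, 11, 14]  # hypothetical scores
phase2_scores = [16, 18, 15, 17, 19, 14, 18, 17]  # hypothetical scores

ALPHA = 0.05  # significance level, matching the 95% confidence level

stat, p_value = mannwhitneyu(phase1_scores, phase2_scores, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
if p_value < ALPHA:
    print("Significant difference between Phase 1 and Phase 2")
else:
    print("No significant difference detected")
```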
A tender process consists of competing offers from different candidate suppliers or contractors. The tender winner is expected to supply goods or provide a service under better conditions than its competitors. Tenders are commonly run on centralized, unverified systems, which reduces transparency, fairness, and trust in the process, as well as the ability to detect malicious attempts to manipulate it. Systems that provide formal verification, decentralization, authentication, trust, and transparency can mitigate these risks. Satisfiability Modulo Theories (SMT) provides formal analysis to prove the correctness of tender offer properties, and verified properties help ensure system reliability. In addition, one technology that claims to provide decentralization is Blockchain, a chain of distributed and decentralized records linked in a way that ensures integrity. This paper presents a formally verified and decentralized proposal system, based on Satisfiability Modulo Theories and Blockchain technology, to make electronic procurement tenders more reliable, transparent, and fair.
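As a rough illustration of how SMT solving can check offer properties, the following sketch uses the Z3 Python bindings with invented price and delivery-time constraints; it is a minimal example of the general technique, not the system described in the paper:

```python
# Illustrative SMT sketch with Z3 (invented constraints, not the paper's model):
# check that a hypothetical tender offer satisfies the call's requirements.
from z3 import Real, Solver, And, sat

price, delivery_days = Real("price"), Real("delivery_days")

offer = And(price == 9500, delivery_days == 30)          # hypothetical offer
requirements = And(price <= 10000, delivery_days <= 45)  # hypothetical limits

s = Solver()
s.add(offer, requirements)
print("offer satisfies requirements" if s.check() == sat else "offer rejected")
```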
Context: The impact of an excellent estimation on planning, budgeting, and control makes estimation activities an essential element of software project success. Several estimation techniques have been developed during the last seven decades. Traditional regression-based estimation is the method most often used in the literature. Generating models requires a reference database, which is usually a wedge-shaped dataset when real projects are considered, and regression-based estimation techniques provide low accuracy with this type of database. Objective: To evaluate and provide an alternative to the general practice of using regression-based models, examining whether smooth curve methods and variable selection and regularization methods provide more reliable estimations on wedge-shaped databases. Method: A previous study used a reference database with a wedge-shaped form to build a regression-based estimation model. This paper uses smooth curve methods and variable selection and regularization methods to build estimation models, providing an alternative to linear regression models. Results: The results show an improvement in estimation when smooth curve methods and variable selection and regularization methods are used instead of regression-based models on wedge-shaped databases. For example, a GAM with all the variables achieves an R-squared of 0.6864 for Effort and 0.7581 for Cost, with an MMRE of 0.1095 for Effort and 0.0578 for Cost. A GAM with LASSO achieves an R-squared of 0.6836 for Effort and 0.7519 for Cost, with an MMRE of 0.1105 for Effort and 0.0585 for Cost. By comparison, MLR yields an R-squared of 0.6790 for Effort and 0.7540 for Cost, with an MMRE of 0.1107 for Effort and 0.0582 for Cost.
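For reference, MMRE is the mean magnitude of relative error, mean(|actual - estimated| / actual), taken over all projects; lower values are better. A minimal Python sketch of the metric on made-up effort values (not the paper's data):

```python
# Minimal sketch of the MMRE metric (made-up values, not the paper's data):
# MMRE = mean(|actual - estimated| / actual) over all projects.
import numpy as np

actual = np.array([120.0, 300.0, 80.0, 450.0])     # hypothetical actual effort
estimated = np.array([110.0, 330.0, 90.0, 420.0])  # hypothetical model outputs

mmre = np.mean(np.abs(actual - estimated) / actual)
print(f"MMRE = {mmre:.4f}")  # lower is better
```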
Software analysis is the process carried out to obtain requirements that reflect the needs of a client's stakeholders and allow the construction of a software product that meets their expectations. However, it is also known as a process in which many defects are injected. In this context, although process improvement has contributed to the software industry, in the case of software requirements it still needs to be studied to determine the improvements obtained and the established models. In the literature reviewed, a similar mapping study with four research questions was identified and used as a reference. The objective of this work is to structure the available literature on process improvement in the software requirements engineering (SRE) domain in order to identify the improvement phases, paradigms, principles, and established models. For this purpose, a systematic mapping study (SMS) was carried out in the most recognized digital databases. The mapping retrieved a total of 1,495 studies, and after the selection process, 86 primary studies were obtained. This SMS established and answered 13 research questions. The different models applied throughout the software requirements engineering process were identified, the accepted studies were classified, and findings on SRE process improvement were collected. The most used models are CMMI, the Requirements Engineering Good Practice Guide (REGPG), and ISO/IEC 15504. Also, 62% of the accepted studies are of the proposal and evaluation types; that is, they propose a framework or study the implementation of a proposal in one or more case studies, respectively. On the other hand, it was found that most of the studies focused on the analysis phase of process improvement. Likewise, in contrast with a previous study, proposal- and validation-type studies have each increased by nine papers from 2014 to date. This shows the interest of the scientific community in this domain.
DevOps is a philosophy and framework that allows software development and operations teams to work in a coordinated manner, with the purpose of developing and releasing software quickly and cheaply. However, the effectiveness and benefits of DevOps depend on several factors, as reported in the literature. In particular, several studies have been published on software test automation, a cornerstone of the continuous integration phase in DevOps, and this literature needs to be identified and classified. This study consolidates and classifies the existing literature on automated testing in the DevOps context. A systematic mapping study was performed to identify and classify papers on automated testing in DevOps based on 8 research questions. The query of 6 relevant databases returned 3,312 papers; after the selection process, 299 papers were selected as primary studies. Researchers maintain a continuing and growing interest in software testing in the DevOps context. Most of the research (71.2%) is carried out in industry and concerns web applications and SOA. The most reported types of tests are unit and integration tests.
This article presents a study of the publications on the ISO/IEC 29110 standard in the university context, especially from the perspective of software engineering education. ISO/IEC 29110 is a systems and software engineering standard, published in multiple parts, that defines life cycle profiles for very small entities. Since its publication in 2011, and through its continuous evolution to the present day, ISO 29110 has been the subject of study in different contexts, with education being a relevant axis. Considering that software engineering education has implications for the software industry in emerging countries, it is necessary to identify and consolidate the work done in this context. In this study, the main research question was: what research has been done on ISO 29110 in the training of software engineers? To answer this question, a systematic mapping study (SMS) was performed. In the SMS, 241 articles were obtained with the search string, and 17 of them became primary studies after the selection process. Based on these studies, it was possible to determine that the software engineering Basic profile of ISO 29110 and its processes (Project Management and Software Implementation) have been the most studied. Besides, it was identified that project-oriented learning and gamification techniques have been the most used ISO 29110 learning strategies in the training of future software industry professionals.
Applications that work with data must ensure its reliable storage. The interfaces available for working with file systems are not sufficiently specified, and using them correctly, without risking loss of user data, requires considerable expertise. As part of this work, a tool was developed that gives developers the opportunity to test their applications and identify the most common errors. The tool is based on collecting events from the interaction of the application with the file system and then running checks that can indicate errors. The tool implements a modular architecture that allows the available set of checks to be extended. The developed tool was integrated into the process of testing the implementation of a durable log, similar to the write-ahead log, a component implemented in many database management systems. The tool made it possible to detect and correct several errors that could lead to data loss.
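The following minimal Python sketch illustrates the general idea of pluggable checks over a recorded trace of file-system events; the event model and the specific check are invented for illustration and do not reproduce the paper's tool:

```python
# Minimal sketch of pluggable checks over a file-system event trace
# (invented event model and check, not the actual tool).
from dataclasses import dataclass

@dataclass
class FsEvent:
    op: str   # e.g. "write", "fsync", "close"
    path: str

def check_fsync_before_close(trace):
    """Flag files closed with writes that were never fsync'ed."""
    dirty, errors = set(), []
    for ev in trace:
        if ev.op == "write":
            dirty.add(ev.path)
        elif ev.op == "fsync":
            dirty.discard(ev.path)
        elif ev.op == "close" and ev.path in dirty:
            errors.append(f"{ev.path}: closed with unsynced writes")
    return errors

CHECKS = [check_fsync_before_close]  # modular: new checks are appended here

trace = [FsEvent("write", "wal.log"), FsEvent("close", "wal.log")]
for check in CHECKS:
    for err in check(trace):
        print(err)
```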
This paper studies the occurrence of insecure deserialization in communication between the client-side code and the server side of a web application. Special attention was paid to serialized objects sent from JavaScript client-side code. Specific patterns of using serialized objects within client-side JavaScript code were identified, and unique classes were formulated whose main goal is to facilitate manual and automatic analysis of web applications. A tool that detects serialized objects in the client-side code of a web page was designed and implemented. The tool is capable of finding encoded serialized objects, including serialized objects encoded with several sequentially applied encodings. For each found sample of a serialized object, the tool determines the context in which the object appears on the page. For objects inside JavaScript code, the tool identifies the previously mentioned classes by matching vertices of the code's abstract syntax tree (AST). After these results were obtained, web application endpoints were checked to determine whether the objects were deserialized on the server side. This check revealed previously unknown vulnerabilities, which were reported to the developers of the affected software; one of them was assigned CVE-2022-24108. Based on the results of this research, a method was proposed to facilitate both manual and automated searches for vulnerabilities of the "Deserialization of untrusted data" class. The proposed approach was tested on more than 50,000 web application pages from the Alexa Top 1M list, as well as on 20,000 web application pages from Bug Bounty programs.
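The following toy Python sketch illustrates the general detection idea of peeling encoding layers and testing for known serialization signatures; the signature set, decoding chain, and depth limit are illustrative assumptions, not the paper's implementation:

```python
# Toy sketch (illustrative, not the paper's tool): look for serialization
# signatures behind one or more layers of base64 or URL encoding.
import base64
from urllib.parse import unquote

SIGNATURES = {
    b"\xac\xed\x00\x05": "Java serialized object",
    b"O:": "PHP serialized object",
    b"\x80\x04": "Python pickle (protocol 4)",
}

def detect(value: str, max_depth: int = 3):
    data = value.encode()
    for _ in range(max_depth):
        for sig, label in SIGNATURES.items():
            if data.startswith(sig):
                return label
        # try to peel one more encoding layer
        try:
            data = base64.b64decode(data, validate=True)
        except ValueError:
            data = unquote(data.decode(errors="ignore")).encode()
    return None

# Hypothetical sample: a base64-encoded PHP serialized object.
print(detect(base64.b64encode(b"O:8:\"UserData\":1:{...}").decode()))
```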
One possible way to reduce bugs in source code is to create intelligent tools that make the development process easier. Such tools often use vector representations of the source code and machine learning techniques borrowed from the field of natural language processing. However, such approaches do not take into account the specifics of source code and its structure. This work studies methods for pretraining graph vector representations of source code, where the graph represents the structure of the program. The results show that graph embeddings achieve an accuracy in classifying variable types in Python programs comparable to that of CodeBERT embeddings. Moreover, the simultaneous use of text and graph embeddings in a hybrid model can improve the accuracy of type classification by more than 10%.
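The following schematic PyTorch sketch shows the general shape of such a hybrid classifier, concatenating a text embedding and a graph embedding of the same variable occurrence; all dimensions, layer sizes, and the classifier head are placeholders, not the paper's model:

```python
# Schematic sketch of a hybrid text+graph embedding type classifier
# (placeholder dimensions and layers, not the paper's architecture).
import torch
import torch.nn as nn

class HybridTypeClassifier(nn.Module):
    def __init__(self, text_dim=768, graph_dim=128, num_types=50):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + graph_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_types),
        )

    def forward(self, text_emb, graph_emb):
        # concatenate the two views of the same variable occurrence
        return self.head(torch.cat([text_emb, graph_emb], dim=-1))

model = HybridTypeClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 128))  # batch of 4
print(logits.shape)  # torch.Size([4, 50])
```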