The MídiaCom Laboratory Workshop aims to disseminate the research and development work in the areas of Computer Networks and Multimedia Systems carried out by the MídiaCom team of students and researchers. It is an annual event celebrating the anniversary of the MídiaCom Laboratory, founded in October 2003. In the 2021 edition, we celebrate 18 years of R&D. The event will take place on September 20 and 21, 2021, entirely online.

September 20 and 21, 2021, from 09:00 to 18:00
Venue: link to be shared with participants

Coordination of the V MídiaCom Laboratory Workshop – 18 Years of R&D
Prof. Célio Vinicius Neves de Albuquerque


WMC19 - Technical Session 4 - WBANs and eHealth

Title: A Wireless Body Area Networks transmission scheduler based on human body movements

Author: Vinicius Correa Ferreira


Abstract: Advances in electronics have enabled the development of intelligent miniaturized biomedical sensors that can be used to monitor the human body. Wireless communication has proved to be an alternative that causes less discomfort to patients and offers a good cost-benefit ratio. To fully exploit the benefits of wireless technologies in telemedicine, a new type of wireless network has emerged: Wireless Body Area Networks (WBANs). However, technical and social challenges must be addressed to enable their adoption. Factors such as the use of the human body as a propagation medium, the effects of radiation on human tissue, and human body movement make WBANs a new paradigm of wireless communication networks. To meet the requirements of WBAN applications while preserving energy efficiency and the user's physical safety, this paper proposes a transmission scheduler based on the movement of the human body. Improvements in packet delivery rate and energy efficiency are observed when compared to polling and random medium access (CSMA/CA).
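The core idea can be illustrated with a toy sketch (our assumption of the mechanism, not the paper's actual algorithm): if the link quality of a body-worn sensor is observed per phase of a periodic movement such as a gait cycle, transmissions can be scheduled in the historically best phases.

```python
# Hypothetical sketch: schedule WBAN transmissions in the phases of a periodic
# body movement (e.g., a gait cycle) where the link has historically been best.
def build_schedule(link_quality_by_phase, num_slots_needed):
    """link_quality_by_phase: dict phase_index -> observed delivery ratio.
    Returns the phases chosen for transmission, best first."""
    ranked = sorted(link_quality_by_phase,
                    key=link_quality_by_phase.get, reverse=True)
    return ranked[:num_slots_needed]

# Illustrative per-phase delivery ratios measured by the hub.
quality = {0: 0.95, 1: 0.40, 2: 0.85, 3: 0.30}
print(build_schedule(quality, 2))  # -> [0, 2], the two most reliable phases
```

A real scheduler would also track how the per-phase statistics drift as the patient's activity changes; this sketch only shows the slot-selection step.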




Title: Simulation of ISO/IEEE 11073 Personal Health Devices in WBANs

Author: Robson Araújo Lima


Abstract: Simulating new protocols for e-health systems is very important, as it allows an initial evaluation before a real implementation is made. On the other hand, network simulators do not offer proper support for representing medical applications or components that facilitate simulations of e-health applications. The lack of simulators that specify the sensor type and its communication requirements makes real experiments harder. Aiming to fill this gap, this paper proposes the use of the ISO/IEEE 11073 standard for Personal Health Devices (X73-PHD) in e-health network simulations, representing realistic medical applications and investigating the behavior of medical devices (sensors or actuators) in Wireless Body Area Network (WBAN) scenarios. We developed a free and open-source implementation of X73-PHD for the Castalia simulator, providing five different PHD types that act like real ISO/IEEE 11073 devices in WBAN simulations. Our implementation supports the Agent-initiated mode, where PHDs take the initiative to send measurements to the hub. It also supports the unconfirmed and confirmed communication modes; in the latter, the receiver sends an acknowledgment to the sender every time it receives a packet. Simulation results showed that the confirmed communication mode did not perform well in WBANs when the interval between transmissions is too small, due to the long timeout proposed in the X73-PHD standard. Therefore, we propose an extension to the confirmed mode that decreases the overhead of control packets over the network by using smaller timeouts and delivering more packets.
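The confirmed-mode behavior the abstract describes (retransmit when no acknowledgment arrives within a timeout) can be sketched as follows; the function and its parameters are illustrative, not the paper's Castalia code:

```python
# Illustrative sketch of confirmed-mode delivery with retransmission on timeout.
def send_confirmed(deliver, max_retries, ack_timeout):
    """deliver(attempt) -> True if the packet and its ACK got through within
    ack_timeout (carried here only for illustration; real code would wait).
    Returns (success, attempts_used)."""
    for attempt in range(1, max_retries + 1):
        if deliver(attempt):
            return True, attempt  # ACK received
        # no ACK before ack_timeout expired: retransmit
    return False, max_retries

# Example: the channel drops the first attempt, the second one succeeds.
ok, attempts = send_confirmed(lambda a: a >= 2, max_retries=3, ack_timeout=0.5)
print(ok, attempts)  # True 2
```

With a long `ack_timeout` (as in the X73-PHD standard), each lost packet stalls the sender for the whole timeout, which is consistent with the poor performance the abstract reports for short inter-transmission intervals.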



Title: Identifying Post-Traumatic Stress Symptoms Using Physiological Signals and Artificial Intelligence

Author: Luiz Antonio da Ponte Junior


Abstract: The number of people diagnosed with an anxiety disorder has increased. Correctly diagnosing such disorders is not always a trivial task, often forcing the individual to consult many clinicians and undergo several medical exams. Post-Traumatic Stress Disorder (PTSD) is a disorder related to experienced events that posed a certain degree of threat to an individual. When experiencing situations that recall past events, an individual may present reactions that trigger physiological changes in his/her organism, such as tachycardia or bradycardia. Many disorders have common symptoms, and recognizing these subtleties improves the efficiency and effectiveness of the diagnosis. Artificial Intelligence (AI) techniques have helped specialists in the diagnosis and prevention of diseases and disorders, accelerating the process and increasing its effectiveness. In this paper, we aim at finding new biomarkers to diagnose PTSD by analyzing physiological signals with AI techniques. We used a dataset from an experiment with civilians recently exposed to traumatic events related to violence. Those individuals completed a questionnaire that evaluates the impact of such events through the PCL (PTSD Checklist for DSM-IV) scale. Heart rate and skin conductance signals were collected while they viewed emotional and neutral stimulus images. We applied data mining techniques and classification algorithms to evaluate and maximize PCL score prediction performance on those physiological signal data. The best result was obtained with the Naive Bayes algorithm, after applying supervised discretization and attribute selection, presenting an accuracy of 96.36% (p-value = 0.001), an F-measure of 0.9636, and an AUC (Area Under the ROC Curve) of 0.9681.
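The discretize-then-Naive-Bayes pipeline can be sketched in a few lines; everything below (cut points, feature names, toy data) is illustrative and not the paper's dataset or tooling:

```python
# Minimal sketch of the pipeline: discretize continuous physiological signals
# into bins, then classify with a categorical Naive Bayes (Laplace smoothing).
from collections import Counter, defaultdict

def discretize(value, cut_points):
    """Map a continuous value to a bin index given sorted cut points."""
    return sum(value > c for c in cut_points)

class CategoricalNB:
    def fit(self, X, y):
        self.priors = Counter(y)
        self.counts = defaultdict(Counter)  # (class, feature_idx) -> value counts
        for row, label in zip(X, y):
            for i, v in enumerate(row):
                self.counts[(label, i)][v] += 1
        return self

    def predict(self, row):
        def score(label):
            s = self.priors[label]
            for i, v in enumerate(row):
                c = self.counts[(label, i)]
                s *= (c[v] + 1) / (sum(c.values()) + len(c) + 1)  # smoothed
            return s
        return max(self.priors, key=score)

# Toy heart-rate / skin-conductance readings with invented cut points.
cuts_hr, cuts_sc = [70, 90], [2.0]
X = [(discretize(hr, cuts_hr), discretize(sc, cuts_sc))
     for hr, sc in [(65, 1.1), (95, 3.2), (110, 2.9), (72, 1.4)]]
y = ["low", "high", "high", "low"]
model = CategoricalNB().fit(X, y)
print(model.predict((discretize(100, cuts_hr), discretize(3.0, cuts_sc))))
```

The paper's supervised discretization chooses the cut points from labeled data; here they are fixed by hand for brevity.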




Title: Digital Cardiology: A mobile app to support the preparation for the 18F-FDG PET-CT exam for infective endocarditis patients

Author: Celine Soares


Abstract: Positron Emission Tomography (PET) has been increasingly used in cardiac examinations as part of the imaging repertoire. A recurrent issue in PET-CT preparation is the need to reduce glucose uptake by the myocardium. In order to achieve proper suppression and, consequently, increased accuracy, it is mandatory to modify the diet for three days before the 18F-FDG PET-CT. Under the auspices of a multidisciplinary team, we are developing, with the Ionic 4 cross-platform framework, a mobile application able to register the patient and inform the user of the proper diet to follow for the three days before the exam. Once the app is installed on smartphones, we plan to analyze its implementation and evaluate whether the project offers practicality, democratization of access, and quality indicators for managers.
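The scheduling rule stated above (start the diet three days before the exam) is simple enough to show directly; this Python snippet is our sketch, not the app's Ionic/TypeScript code:

```python
# Sketch of the reminder logic: the diet must start three days before the
# scheduled 18F-FDG PET-CT exam.
from datetime import date, timedelta

def diet_start(exam_date: date, days_before: int = 3) -> date:
    """Return the date on which the patient must begin the modified diet."""
    return exam_date - timedelta(days=days_before)

print(diet_start(date(2021, 9, 20)))  # -> 2021-09-17
```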

WMC19 - Technical Session 3 - Multimedia and Edge Computing

Title: Semi-automatic Synchronization of Sensory Effects in Mulsemedia Authoring Tools

Author: Raphael Silva de Abreu


Abstract: Synchronizing sensory effects with multimedia content is a non-trivial and error-prone task that can discourage the authoring of mulsemedia applications. Although there are authoring tools that assist in the specification of sensory effect metadata in an automated way, the forms of analysis they use are not general enough to identify complex components that may be related to sensory effects. In this work, we present an intelligent component that allows the semi-automatic definition of sensory effects. This component uses a neural network to extract information from video scenes, which is then used to define sensory effects synchronized with the related videos. The proposed component was implemented in the STEVE 2.0 authoring tool, helping the authoring of sensory effects in a graphical interface.
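Conceptually, the pipeline turns per-scene labels produced by a classifier into timed effect metadata. The mapping below is our own illustration; it is not STEVE 2.0's actual rule set or API:

```python
# Illustrative mapping from scene labels (as a video classifier might emit
# them) to timed sensory-effect metadata. Labels and effects are invented.
EFFECT_FOR_LABEL = {"explosion": "vibration", "sea": "wind", "fire": "heat"}

def effects_timeline(detections):
    """detections: list of (start_s, end_s, label) from a scene classifier.
    Returns (start_s, end_s, effect) for the labels that map to an effect."""
    return [(s, e, EFFECT_FOR_LABEL[label]) for s, e, label in detections
            if label in EFFECT_FOR_LABEL]

scenes = [(0.0, 4.0, "sea"), (4.0, 5.0, "dialogue"), (5.0, 6.0, "fire")]
print(effects_timeline(scenes))  # dialogue has no associated effect
```

An authoring tool would present such a timeline for the author to confirm or adjust, which is what makes the process semi-automatic rather than fully automatic.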




Title: Automatic Preparation of Media Objects in Multimedia Applications

Author: Marina Ivanov


Abstract: In multimedia applications, spatiotemporal relationships among media objects should be controlled during the execution phase in order to preserve the quality of the presentation. When the content that composes the application is delivered over a communication network, delays may occur due to network congestion. In order to avoid synchronization faults during the presentation of distributed multimedia applications, this work proposes the automatic preparation of media objects, which aims to ensure that all media objects are available on the receiver device at their presentation moment. In our proposal, the multimedia presentation engine (formatter) builds a preparation plan based on the network conditions and on the presentation behavior learned from the multimedia document that defines the application. As proof of concept, we implemented the automatic creation of the preparation plan in the Ginga-NCL middleware. Furthermore, a use case is presented to demonstrate automatic preparation for NCL applications. Finally, a brief discussion about garbage collection in multimedia applications containing non-deterministic events is also presented.
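A preparation plan of the kind described can be sketched as a scheduling computation: fetch each object early enough, given an estimated download time, that it is ready at its presentation moment. The names and numbers below are assumptions for illustration, not the Ginga-NCL implementation:

```python
# Illustrative preparation plan: derive a fetch-start time for each media
# object from its presentation time and an estimated download duration.
def preparation_plan(objects):
    """objects: list of (name, presentation_time_s, est_download_s).
    Returns (name, fetch_start_s) pairs sorted by fetch start time."""
    plan = [(name, max(0.0, t - dl)) for name, t, dl in objects]
    return sorted(plan, key=lambda p: p[1])

media = [("intro.mp4", 0.0, 2.0),   # needed at t=0: fetch immediately
         ("ad.png", 10.0, 1.0),
         ("clip.mp4", 12.0, 6.0)]
print(preparation_plan(media))
```

In the proposal, the download estimates come from observed network conditions, so the plan would be recomputed as those conditions change.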




Title: V-PRISM: An edge-based architecture to virtualize multimedia sensors in the Internet of Things

Author: Anselmo Luiz Eden Battisti


Abstract: The Internet of Things (IoT) enables the interconnection with the Internet of the most varied physical objects, instrumented by intelligent sensors and actuators. By addressing physical objects and making them part of a global network, the IoT has the potential to provide novel applications that make life easier and healthier for citizens, increase the productivity of companies, and promote the building of more intelligent and sustainable cities, environments, and countries. Several types of sensors compose the IoT ecosystem. Among them, multimedia sensors have recently become a major source of data, giving rise to the Internet of Multimedia Things. With the widespread adoption of the IoT and its integration with cloud platforms, a novel paradigm called Cloud of Things (CoT) has recently emerged. In this context, the cloud works as an intermediary between the sensors/IoT devices and applications. CoT systems rely strongly on the concept of virtualization to help deal with the complexity raised by the heterogeneity of the sensors. However, multimedia applications are usually latency-sensitive, so processing the data in a remote cloud is not always effective. A strategy to minimize latency is to process the multimedia stream closer to the data sources, exploiting resources at the edge of the network. Therefore, in this paper, we propose V-PRISM, an architecture to virtualize multimedia sensors with components deployed and executed at the edge tier of a Cloud of Things ecosystem. The adoption of V-PRISM can reduce battery and CPU consumption on IoT devices, reduce traffic on the IoT network, decrease end-to-end latency, and increase the ROI for infrastructure providers.

WMC19 - Keynote Session 1 - George Ghinea


Prof. George Ghinea
Brunel University


Title: Mulsemedia in 360 VR


Abstract: Previous research has shown that adding multisensory media---mulsemedia---to traditional audiovisual content has a positive effect on user Quality of Experience (QoE). However, the QoE impact of employing mulsemedia in 360 videos has remained unexplored. Accordingly, in this talk, Prof. Ghinea is going to present results of a recently concluded QoE study for watching 360 videos---with and without multisensory effects---in a full free-viewpoint Virtual Reality.


Short Bio: Dr. Gheorghita (George) Ghinea is a Professor of Mulsemedia Computing in the Department of Computer Science at Brunel University. Dr. Ghinea's research activities lie at the confluence of Computer Science, Media, and Psychology. In particular, his work focuses on perceptual multimedia quality and how to build end-to-end communication systems incorporating user perceptual requirements. To this end, recognising the infotainment duality of multimedia, Dr. Ghinea proposed the Quality of Perception metric as a more complete characterisation of the human side of the multimedia perceptual experience. Dr. Ghinea has applied his expertise in areas such as eye-tracking, telemedicine, multi-modal interaction, and ubiquitous and mobile computing, leading a team of 8 researchers in these areas. He has over 300 publications in his research field and is the lead Brunel investigator of the H2020 project NEWTON, applying mulsemedia to STEM learning across Europe.




WMC19 - Technical Session 2 - AI and Intelligent Services

Title: A Counselors-Based Intrusion Detection Architecture

Author: Silvio Ereno Quincozes


Abstract: Intrusion Detection Systems (IDSs) are a fundamental component of defensive solutions. In particular, signature-based IDSs aim to detect malicious activities on computer systems and networks by relying on data classification models built from a training dataset. However, classifiers performance can vary for each attack pattern. A common technique to overcome this issue is to use ensemble methods, where multiple classifiers are employed and a final decision is taken combining their outputs. Despite the potential advantages of such an approach, its usefulness is limited in scenarios where (i) multiple expert classifiers present divergent results or (ii) representative data are missing to detect a specific attack class. In this work, we introduce the concept of counselor networks to deal with conflicts from different classifiers by exploiting the collaboration between IDSs that analyze multiple and heterogeneous data sources. Our empirical results demonstrate the feasibility of the proposed architecture in improving the accuracy of the intrusion detection process.
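The conflict-resolution idea can be illustrated with a toy decision combiner: when the local classifiers reach no clear majority, defer to a "counselor" that sees a different data source. This is only a sketch of the concept; the paper's counselor networks are richer than a tie-breaking vote:

```python
# Toy sketch of ensemble decision combining with a tie-breaking "counselor".
from collections import Counter

def decide(classifier_votes, counselor_vote):
    """classifier_votes: class labels emitted by the local classifiers.
    On a strict majority, take it; on a conflict (tie), consult the
    counselor IDS, which analyzes a different, heterogeneous data source."""
    tally = Counter(classifier_votes).most_common()
    if len(tally) > 1 and tally[0][1] == tally[1][1]:
        return counselor_vote  # conflict between experts: ask the counselor
    return tally[0][0]

print(decide(["attack", "normal", "attack"], "normal"))  # majority: attack
print(decide(["attack", "normal"], "normal"))            # tie: counselor wins
```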




Title: A Cache Prefetch Policy based on Users’ Temporal-and-Social Behavior for Content Management in Wireless Access Networks

Author: Cleomar Márcio Marques de Oliveira


Abstract: In this article, a new policy is proposed for storing and dropping cached content at Wireless Access Network nodes. The proposed policy selects content that can be dropped and new content to be cached in a network node, within predefined daily time periods, each with a pre-established duration, repeated every day. The decision making considers temporal aspects and the social behavior of the users that connect to the node. An algorithm selects new content, in the same proportion, from the categories historically requested in those time periods on the previous day. The purpose of this selection is to cache the new content on the current day, in the corresponding time periods, to increase the content request hit ratio. Simulation results against the established FIFO, LRU, LFU, and RANDOM policies show that the proposed policy's hit ratio is 2.46 times higher than the others, with an average hit ratio of 13.1% versus an average hit ratio of 5.325% for the cited policies, in the evaluated scenarios.
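The proportional-selection step can be sketched as follows; all names and data here are illustrative, not the paper's simulation setup:

```python
# Rough sketch: for one daily time period, pick new content to prefetch in the
# same proportion as the categories requested in that period on the previous day.
from collections import Counter

def prefetch_selection(yesterday_requests, candidates_by_category, budget):
    """yesterday_requests: category labels observed in this time period.
    candidates_by_category: category -> list of new content ids.
    Returns up to `budget` ids, proportional to category popularity."""
    share = Counter(yesterday_requests)
    total = sum(share.values())
    chosen = []
    for cat, cnt in share.most_common():
        quota = round(budget * cnt / total)
        chosen.extend(candidates_by_category.get(cat, [])[:quota])
    return chosen[:budget]

hist = ["news"] * 6 + ["sports"] * 2 + ["music"] * 2
new = {"news": ["n1", "n2", "n3", "n4"], "sports": ["s1", "s2"], "music": ["m1"]}
print(prefetch_selection(hist, new, budget=5))  # 3 news, 1 sports, 1 music
```

The full policy also decides which currently cached items to drop to make room; this snippet covers only the proportional prefetch selection.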




Title: Using ubus for collecting data and remote configuration of OpenWRT Access Points

Author: Yago de Rezende dos Santos


Abstract: As SCIFI progressed, different protocols were used for collecting data from and configuring Access Points. Originally, SSH and SCP were used for increased security on an open network. When the control network was isolated on its own VLAN, data collection and configuration migrated to SNMP. Finally, the new design calls for RPC using the ubus infrastructure.





Title: Natural Language Processing Characterization of Recurring Calls in Public Security Services

Author: Nicollas Rodrigues De Oliveira


Abstract: Extracting knowledge from unstructured data silos, a legacy of old applications, is mandatory for improving the governance of today's cities and fostering the creation of smart cities. Texts in natural language often compose such data. Nevertheless, the inference of useful information from a linguistic-computational analysis of natural language data is an open challenge. In this paper, we propose a clustering method to analyze textual data employing the unsupervised machine learning algorithms k-means and hierarchical clustering. We assess different vector representation methods for text, similarity metrics, and the number of clusters that best matches the data. We evaluate the methods using a real database of a public record service of security occurrences. The results show that the k-means algorithm using Euclidean distance extracts non-trivial knowledge, reaching up to 93% accuracy in a set of test samples while identifying the 12 most prevalent occurrence patterns.
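The clustering step can be illustrated with a minimal k-means over toy two-dimensional vectors (standing in for the text vector representations the paper evaluates), using Euclidean distance as reported; this sketch is ours, not the paper's code:

```python
# Minimal k-means sketch with Euclidean distance over toy document vectors.
import math
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # initial centers from the data
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assign each point to its nearest center
            nearest = min(range(k), key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        new_centers = []
        for i, c in enumerate(clusters):     # recompute centers as cluster means
            if c:
                new_centers.append(tuple(sum(x) / len(c) for x in zip(*c)))
            else:
                new_centers.append(centers[i])  # keep an empty cluster's center
        centers = new_centers
    return centers, clusters

# Two obvious groups of toy vectors.
pts = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.1), (0.8, 0.2)]
centers, clusters = kmeans(pts, k=2)
print(sorted(len(c) for c in clusters))  # -> [2, 2], two balanced clusters
```

In the paper, the points would be vector representations of occurrence texts, and the number of clusters (12 patterns) is itself selected by evaluating candidate values.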
