Doctoral Program

Postgraduate Colloquium Series

The Postgraduate Colloquium Series is an opportunity for thesis students to present their ongoing research work to the DEI and CISUC research communities, including all Ph.D. and Master's students and academic and research staff. Other interested parties are most welcome to attend as well.

Schedule and Venue

- DEI Building (Polo II) - Amphitheater B1, at 14.00h (sharp!)

- To receive regular updates about the Postgraduate Colloquium Series, join our mailing list here.

Next Colloquium


15/03/2013

"Bipolar EEG processing for seizure prediction and detection"

Mojtaba Bandarabadi

CISUC - AC

Abstract:

"Epilepsy is the second most prevalent brain disorder. About 60 million people worldwide suffer from epileptic seizures. An accurate seizure prediction algorithm implemented on an implantable device would allow for novel reactive therapies acting on time scales of seconds to minutes prior to the seizure onset. Any success in real-time seizure prediction could improve the living conditions of epileptic patients.

Bivariate measures extracted from electroencephalogram (EEG) data, provide a quantitative measure of interactions between different brain regions. Bipolar analysis of spectral powers of multichannel EEG and intracranial EEG (iEEG) are introduced to track the gradual changes from interictal to the ictal state, in patients suffering from refractory partial epileptic seizures. The methods are very low complexity and suitable for real-time analysis of EEG and iEEG signals.

Considering all possible channel pairs creates a very highly dimensional feature space. Feature selection methods are developed to choose the best candidate features and to reduce the feature space dimension. The selected features are then fed to classifiers to classify the cerebral state in preictal and non-preictal classes. In one study, we compare two classifiers, a low complexity Adaboost to the more complex support vector machine (SVM). Adaboost is a linear classier using decision stumps, and SVM uses a nonlinear Gaussian kernel."
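For readers who want a concrete picture of the classifier comparison described above, the following sketch (illustrative only, not the author's implementation) contrasts a decision-stump AdaBoost with an RBF-kernel SVM on synthetic features standing in for selected channel-pair spectral powers; all data, sizes and parameters are made up.

```python
# Minimal sketch: decision-stump AdaBoost vs RBF-kernel SVM on synthetic
# "bivariate spectral power" features labelled preictal (1) vs non-preictal (0).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n_pairs = 40                                          # e.g. selected channel-pair features
X_non = rng.normal(0.0, 1.0, size=(500, n_pairs))     # non-preictal windows
X_pre = rng.normal(0.6, 1.0, size=(100, n_pairs))     # preictal windows (shifted mean)
X = np.vstack([X_non, X_pre])
y = np.hstack([np.zeros(len(X_non)), np.ones(len(X_pre))])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# scikit-learn's AdaBoost uses depth-1 decision trees (stumps) by default.
ada = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_tr, y_tr)

for name, clf in [("AdaBoost (stumps)", ada), ("SVM (RBF)", svm)]:
    print(name)
    print(classification_report(y_te, clf.predict(X_te), digits=3))
```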

__________________________________________________________________________________________________________________
Forthcoming

Date | Student | Research Group | Title
22/03 | --- | --- | ---
05/04 | TBA | TBA | TBA
12/04 | TBA | TBA | TBA
19/04 | José Marinho | LCT | Enhanced Protection of Primary Users in Distributed Cognitive Radio Scenarios
26/04 | Marco Veloso | CMS | Study of urban mobility through the analysis of taxi patterns
03/05 | TBA | TBA | TBA
17/05 | Naghmeh Ivaki | SSE | TBA
24/05 | Gil Gonçalves | INESC Coimbra | Using ALS data and CIR ortho-images for DTM production in urban areas: a data fusion approach
31/05 | Vitor Bernardo | LCT | Towards Energy-Efficiency in Wireless Networks
__________________________________________________________________________________________________________________
Past Colloquia

Date | Student | Research Group | Title
08/03 | Rituparna Datta | Kangal, Indian Institute of Technology at Kanpur | A Hybrid Evolutionary-Penalty Approach for Constrained Optimization: Development and Applications
01/03 | Samuel Neves | SSE | BLAKE2: Simpler, Smaller, Fast as MD5
22/02 | Dina Soeiro | IS | Student Empowerment in Higher Education through Democratic Strategies and Participatory Evaluation
14/12 | David Bowman | CBME | Application Performance Optimization Using Computational and Non-computational Methods
07/12 | Norberto Henriques | CMS | Integration of GPS Traces and Digital Elevation Maps for Improving Bicycle Traffic Simulation Behavior
30/11 | Davi Baccan | CMS | The Role of Surprise in Agent-based Computational Economics (ACE)
23/11 | Cidália Fonte | DMUC | The usefulness of the uncertainty information in the automatic classification of multispectral images
09/11 | Andreia Guerreiro | ECOS | Efficient Algorithms for the Assessment of Stochastic Multiobjective Optimizers
02/11 | João Franco | SSE | Automated Reliability Prediction & Analysis of SwAs
26/10 | Sebastian Schenker | Zuse Institute Berlin | About an implementation of a 3-objective linear programming solver
19/10 | António Oliveira | CMS | A Musical System for Emotional Expression
12/10 | Noel Lopes | AC | Fast training of restricted Boltzmann machines
28/09 | Nuno Laranjeiro | SSE | Assessing the Robustness of Software Services
21/09 | David Palma | LCT | Towards Scalable Routing for Wireless Multi-hop Networks
08/06 | Hugo Oliveira | CMS | Onto.PT: Towards the Automatic Construction of a Lexical Ontology for Portuguese
01/06 | Bruno Antunes | CMS | Context-Based Retrieval in Software Development
25/05 | Ricardo Couceiro | AC | Cardiovascular performance assessment for p-health applications
18/05 | Oswaldo Ludwig | ISR | Study on non-parametric methods for fast pattern recognition with emphasis on neural networks and cascade classifiers
11/05 | Valter Alves | IS | Sound Design Guidance as a Contribution towards the Empowerment of Indie Game Developers
04/05 | João P. Magalhães | SSE | Pinpointing Anomalies in Web-based Applications by Using Application-level Monitoring
27/04 | Rui Lopes | ECOS | Exploring Regulatory Mechanisms in Evolutionary Computation
20/04 | Dirson Campos | CMS | Generation of different model of feedbacks based on software testing and rubrics to support programming learning
13/04 | Dinesh Kumar | AC | Aid to Diagnosis of Cardiovascular Diseases by Heart sound analysis
30/03 | Pedro Martins | AC | On The Saliency and Stability of Local Features
23/03 | Nuno Antunes | SSE | The Devils Behind Web Application Vulnerabilities
16/03 | Marisa Figueiredo | CMS | Electrical Signal Disaggregation using Decomposition Methods
09/03 | Ramona Cabiddu | AC | Autonomic nervous modulation of cardiorespiratory coupling during sleep
02/03 | Tiago Baptista | ECOS | Complexity and Emergence in Societies of Agents
24/02 | Ana Oliveira Alves | CMS | Semantic Enrichment of Places: Understanding the Meaning of Public Places from Natural Language Texts
17/02 | Bruno Miguel Direito | AC | EEG signal processing for epileptic seizure prediction
__________________________________________________________________________________________________________________


08/03/2013

"A Hybrid Evolutionary-Penalty Approach for Constrained Optimization: Development and Applications"

Rituparna Datta

Kangal, Indian Institute of Technology at Kanpur

Abstract:

"The holy grail of constrained optimization is the development of an efficient, scale invariant and generic constraint handling procedure in single and multi-objective constrained optimization problems. Most optimization problems in science and engineering consist of one or many constraints, which come into picture mainly due to some physical limitations or functional requirements. Constraints can be divided into inequality type and equality type, but the challenge is to obtain feasible solutions that satisfy all constraints with minimal computational effort.

The classical penalty function approach is a widely used constraint handling method, in which the objective function value is penalized in proportion to the constraint violation. Initially Evolutionary Algorithms (EAs) were designed for unconstrained optimization but have now evolved to include various constraint handling mechanisms. In this work, we propose a synergistic combination of bi-objective evolutionary approach with the penalty function methodology, to solve problems with single objective having inequality constraints. This methodology is then extended for equality and inequality+equality mixed constrained problems. Normalization of constraints is crucial for the efficient performance of any constraint handling algorithm. Therefore, our method is extended to normalize all the constraints adaptively during the optimization process. Having developed efficient constraint handling strategies for single-objective optimization problems we then develop constraint handling strategy for bi-objective optimization problems. We demonstrate the efficacy of the proposed method by solving real world single and multi-objective constrained optimization problems."
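As a concrete illustration of the classical penalty-function idea mentioned above (not of the hybrid evolutionary method presented in the talk), the sketch below penalizes a toy objective in proportion to the squared violation of a single inequality constraint; the problem and the penalty parameter R are invented for the example.

```python
# Minimal penalty-function sketch: f is penalized when the constraint g(x) <= 0
# is violated, pushing the unconstrained optimizer towards the feasible region.
import numpy as np
from scipy.optimize import minimize

def f(x):                            # objective: min (x0-1)^2 + (x1-2)^2
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def g(x):                            # inequality constraint: x0 + x1 - 2 <= 0
    return x[0] + x[1] - 2.0

def penalized(x, R=1e3):             # add R * violation^2 when g(x) > 0
    violation = max(0.0, g(x))
    return f(x) + R * violation ** 2

res = minimize(penalized, x0=np.zeros(2), method="Nelder-Mead")
print(res.x, f(res.x), g(res.x))     # approaches the constrained optimum (0.5, 1.5)
```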
Top | Past Colloquia | Slides


01/03/2013

"BLAKE2: Simpler, Smaller, Fast as MD5"

Samuel Neves

CISUC - SSE

Abstract:

We present the hash function BLAKE2, an improved variant of the SHA-3 finalist BLAKE optimized for speed in software. Target applications include cloud storage, intrusion detection, and version control systems. BLAKE2 comes in two main flavors: BLAKE2b is optimized for 64-bit platforms, and BLAKE2s for smaller architectures. On 64-bit platforms, BLAKE2 is often faster than MD5, yet provides security similar to that of SHA-3: up to 256-bit collision resistance, immunity to length extension, indifferentiability from a random oracle, etc. We also specify parallel versions, BLAKE2bp and BLAKE2sp, that are up to 4 and 8 times faster, respectively, by taking advantage of SIMD and/or multiple cores. BLAKE2 reduces the RAM requirements of BLAKE down to 168 bytes, making it smaller than any of the five SHA-3 finalists, and 32% smaller than BLAKE. Finally, BLAKE2 provides comprehensive support for tree hashing as well as keyed hashing (in both sequential and tree modes).
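BLAKE2 is readily usable from Python's standard library, which exposes blake2b and blake2s in hashlib, including keyed hashing and adjustable digest sizes; the data and key below are placeholder values.

```python
# BLAKE2 via the standard library: blake2b for 64-bit platforms, blake2s for
# smaller ones; both support keyed hashing and adjustable digest sizes.
import hashlib

data = b"hello colloquium"

h64 = hashlib.blake2b(data, digest_size=32)      # 256-bit digest
h32 = hashlib.blake2s(data, digest_size=16)      # 128-bit digest
mac = hashlib.blake2b(data, key=b"secret-key")   # keyed mode (MAC-like use)

print(h64.hexdigest())
print(h32.hexdigest())
print(mac.hexdigest())
```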
Top | Past Colloquia | Slides


22/02/2013

"Student Empowerment in Higher Education through Democratic Strategies and Participatory Evaluation"

Dina Soeiro

Information Systems - CISUC

Abstract:

The interdisciplinary domain that links Education to Engineering, Information Systems, and Computer Science has become very prominent in the last decades. Institutions such as the ASEE, the IEEE, and the ACM, with their publications, conferences and interest groups explicitly devoted to educational research and innovation, have been essential in this respect. The pressures put upon higher education by the recent wave of accountability, cost cutting, diversity of publics, and open education have helped reinforce the importance of this domain. One of the major issues in this respect is that of improving the quality of education while simultaneously reducing the educator/student ratio. This implies, amongst other aspects, the investigation of educational approaches capable of scaling up smoothly, namely by increasing the autonomy of the students and their ability to engage individually and collectively in the active construction of their knowledge. Similar challenges have recently been experienced in the operation of MOOCs (massive open online courses), which are followed simultaneously by many thousands of students. My thesis addresses the core of this issue.
Top | Past Colloquia | Slides


14/12/2012

"Application Performance Optimization Using Computational and Non-computational Methods"

David Bowman

CBME-University of Algarve

Abstract:

The seminar will include discussion of some of the problems and approaches to developing high performance software applications. Two example cases will be used for illustration from the areas of molecular dynamics and systems biology.

1) The class of application problems that require highly optimized solutions
2) Limitations of existing CPUs and hardware platforms
3) Capabilities of development tools that support performance optimization
4) Limitations of existing optimization tools
5) Approaches using incremental calculation and non-computational methods.
Top | Past Colloquia | Slides

07/12/2012

"Integration of GPS Traces and Digital Elevation Maps for Improving Bicycle Traffic Simulation Behavior"

Norberto Henriques
Cognitive and Media Systems - CISUC

Abstract:

In order to take bicycle traffic into consideration in realistic simulations, one should consider more than a simple approach based on overall average speed and acceleration. Several different factors should be considered, such as the diverse fitness of the people riding bicycles on the road, the way speed varies along segments and, more importantly, how terrain slope influences riders' behavior. In this presentation I will explain our approach to simulating bicycle traffic using real GPS data as a source for enhancing the simulation. We discuss the specific details of the analysis process performed on a set of bicycle commuting traces, as well as the development of a custom driver behavior for VISSIM that uses statistical data taken from those bicycle GPS traces. We also propose a new methodology for considering the way slope affects bicycle traffic, as this is one of the most relevant factors related to bicycle riding.
Top | Past Colloquia | Slides


30/11/2012

"The Role of Surprise in Agent-based Computational Economics (ACE)"

Davi Baccan
Cognitive and Media Systems - CISUC

Abstract:

Decentralized market economies, such as a stock market, are complex systems that involve large numbers of heterogeneous market participants who interact by buying or selling assets. In addition to the inherent complexity of financial markets, highly unexpected market movements, together with findings from behavioral economics, constitute evidence that market participants are not fully rational, stressing the need to adopt novel approaches. In financial markets, participants are often confronted with unexpected events and receive contradictory new information. My hypothesis is that cognitive modelling approaches, with a special focus on surprise, help us to explain and understand the behaviour of financial markets at both the micro and macro levels.
Top | Past Colloquia | Slides

23/11/2012

"The usefulness of the uncertainty information in the automatic classification of multispectral images"

Cidália Fonte
Department of Mathematics, University of Coimbra

Abstract:
Multispectral images are an important source of data for producing Land Cover Maps (LCM), which are fundamental for a wide range of applications. The production of LCM is usually done through the automatic classification of satellite or aerial images, assigning the classes of interest to the pixels. However, uncertainty is present in the several steps of image classification, as well as in the validation process. Traditionally, the uncertainty of the classification results is not computed, but it has been shown that it may be valuable in several phases of LCM production. In this presentation, an introduction is made to the production of LCM through the classification of multispectral images, including a brief review of some image classification methodologies and the assessment of classification accuracy. Classification with soft classifiers is explained in more detail, and some approaches to estimate the classification uncertainty are shown. Some applications of the uncertainty information are then presented, namely its use to: 1) assess the spatial variation of the classification accuracy; 2) refine the training sets used in supervised classification; 3) improve classification accuracy using a combination of classifiers.
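One simple way to quantify the per-pixel uncertainty of a soft classifier, in the spirit of the approaches mentioned above, is the normalized entropy of the class-membership probabilities; the sketch below is a generic illustration with made-up probabilities, not the methodology of the talk.

```python
# Normalized entropy of soft-classifier outputs: 0 = certain, 1 = maximally uncertain.
import numpy as np

def normalized_entropy(probs, eps=1e-12):
    """probs: (n_pixels, n_classes) class-membership probabilities, rows sum to 1."""
    p = np.clip(probs, eps, 1.0)
    h = -np.sum(p * np.log(p), axis=1)
    return h / np.log(probs.shape[1])          # scale to [0, 1]

soft_output = np.array([
    [0.95, 0.03, 0.02],    # confidently classified pixel
    [0.40, 0.35, 0.25],    # ambiguous pixel -> high uncertainty
])
print(normalized_entropy(soft_output))
```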
Top | Past Colloquia | Slides

09/11/2012
"Efficient Algorithms for the Assessment of Stochastic Multiobjective Optimizers"

Andreia Guerreiro
Evolutionary and Complex Systems - CISUC

Abstract:

"In multiobjective optimization, usually due to the complex nature of the problems considered, it is sometimes not possible to find all optimal solutions in a reasonable amount of time. One alternative is to use methods to find good approximation sets, such as Stochastic Multiobjective Optimizers. To assess the outcomes produced by these multiobjective optimizers, quality indicators such as the hypervolume indicator have been proposed. The hypervolume indicator is one of the most studied unary quality indicators, and algorithms are available to compute it in any number of dimensions. It has also been included in stochastic optimizers, such as evolutionary algorithms. Therefore, the time spent computing the hypervolume indicator is an important issue. An alternative to the hypervolume indicator is the Empirical Attainment Function (EAF), which is oriented towards the study of the distribution of the output of different executions of the same optimizer, but the algorithms available are restricted to two and three dimensions. In this presentation, the hypervolume indicator and the EAF will be explained and the state-of-the-art algorithms to compute each of them will be briefly reviewed. Moreover, new algorithms to compute the hypervolume indicator and new algorithms to compute the EAF will be presented. In these new algorithms, both the dimension-sweep and the multidimensional divide-and-conquer paradigms are considered."

Top | Past Colloquia | Slides


02/11/2012
"Automated Reliability Prediction & Analysis of SwAs"

João Franco
Software and Systems Engineering - CISUC

Abstract:

"The quantitative assessment of quality attributes on software architectures allow to support early decisions in the design phase, certify quality requirements established by stakeholders and improve software quality in future architectural changes. This talk concerns on how we provide means for architects predict and analyse reliability constraints on software architectures. We automatically generate a stochastic model from an Architecture Description Language (ADL) specification. This model exhibits the proper behaviour of the system, allowing to quantitatively predict and analyze reliability in order to identify issues and bottlenecks that are negatively influencing the overall system reliability. Hence, our approach will help architects to avoid undesired or infeasible architectural designs and prevent extra costs in fixing late life­-cycle undetected problems."

Top | Past Colloquia | Slides


26/10/2012
"About an implementation of a 3-objective linear programming solver"

Sebastian Schenker
Zuse Institute Berlin

Abstract:

"Despite the growing interest in multi-objective optimization and the wide availability of commercial and non-commercial single objective solvers there exist currently no multi-objective (linear) programming solvers available to the scientific community. In this talk I want to present an implementation of a 3-objective linear programming solver that computes all extreme non-dominated points (and the weight space decomposition). The first part of the talk is concerned with a worst-case example based on deformed cubes that shows that we cannot expect a small number of extreme non-dominated points even for a fixed number of objectives and a convex domain. In the second part of the talk I want to present the basic ideas behind the solver which is based on the weight space decomposition approaches of Benson/Sun and Gandibleux/Przybylski as well as the solver itself.

Top | Past Colloquia | Slides


19/10/2012
"A Musical System for Emotional Expression"

António Oliveira
Cognitive and Media Systems - CISUC

Abstract:

"The automatic control of emotional expression in (tonal) music is a challenge that is far from being solved.  This thesis presents EDME - a system with such capabilities used for the generation of novel musical works which express a particular emotion as specified by the user. The system works with standard MIDI files and develops in two stages: the first offline, the second online. In the first stage, MIDI files are partitioned in segments with uniform emotional content. These are subjected to a process of feature extraction, then classified according to emotional values of valence and arousal and stored in a music base. In the second stage, segments are selected and transformed according to user specified emotion and then arranged into song-like structures. The modularity, adaptability and flexibility of our system’s architecture make it applicable in various contexts like video-games, theatre, films and healthcare contexts. The system is using a knowledge base, grounded on empirical results of works of Music Psychology, which was refined with experimental data obtained with questionnaires. For the experimental setups, we prepared questionnaires with musical segments of different emotional content. Each subject classified each segment after listening to it, with values for valence and arousal. We inferred that the experiments conducted via online had a high degree of reliability, despite the fact of being done in a non-controlled context.
We also calibrated/validated EDME in two experiments where we intended to verify the accuracy of EDME in classifying valence and arousal by using experimental data obtained in a controlled environment. The first experiment collected data with questionnaires based on Self-Assessment Manikin. The second experiment collected behavioral and physiological data. The data show that corrugator muscle activity increase with arousal; heart rate measure in beats per minute increase with arousal, and galvanic skin response increase with both valence and arousal. Only for zigomatic muscle activity there is a significant increase with both, valence and arousal."

Top | Past Colloquia | Slides


12/10/2012
"Fast training of restricted Boltzmann machines"

Noel Lopes
Adaptive Computation - CISUC

Abstract:
"Restricted Boltzmann Machines (RBMs) have recently received much attention due to their potential to integrate more complex and deeper learning architectures. In particular, they are the building blocks of Deep Belief Networks (DBNs), which were recently proposed by Hinton et al. DBNs have been successfully applied to several domains including classification, regression, dimensionality reduction, object segmentation, information retrieval, robotics, natural language processing, and collaborative filtering among others. Nevertheless, training a DBN is a computationally expensive task that involves training several RBMs and requires a considerable amount of time. In this context, we present a Graphics Processing Units (GPU) parallel implementation of RBMs as well as an adaptive step size method which accelerates their convergence. The resulting implementation yielded excellent results in the MNIST handwritten dataset, reducing drastically the training time."

Top | Past Colloquia | Slides

28/09/2012 
"Assessing the Robustness of Software Services"

Nuno Laranjeiro
Software and Systems Engineering - CISUC
Abstract:
"Robustness testing is an effective approach to characterize the behavior of a system in presence of erroneous input conditions. It has been used mainly to assess robustness of operating systems (OS) and OS kernels, but the concept of robustness testing can be applied to any kind of interface. Robustness tests stimulate the system under testing through its interfaces submitting erroneous input conditions that may trigger internal errors or may reveal vulnerabilities. Nowadays, web and messaging services are increasingly being used in business-critical and enterprise application environments. These technologies are supported by a complex software infrastructure that must provide a robust service to the client applications. In this presentation, we describe a practical approach for the evaluation of the robustness of services and present an overview of two cases studies that target web services implementations and major messaging middleware providers."

Top | Past Colloquia | Slides

21/09/2012
"Towards Scalable Routing for Wireless Multi-hop Networks"

David Palma
Laboratory of Communications and Telematics - CISUC
Abstract:
"The growing diffusion of wireless interfaces (namely using the IEEE 802.11 standard) in the most diverse type of equipments has lead to a myriad of new networking scenarios. Several wireless capable gadgets are expected to be interconnected, demanding an increasing amount of resources to existing network infrastructures. In order to suppress these networking needs, in any possible scenario, the Ad-hoc paradigm allows the creation of autonomous infrastructure-less networks, capable of heterogeneously guarantee communication between these wireless devices. Even though previous works already exist regarding routing in Mobile Ad-hoc Networks (MANETs), the increasing demand for these networks revealed that these protocols do not scale accordingly. Bearing this issue in mind, the presented work addresses the scalability of routing protocols, proposing a new routing paradigm capable of handling large-scale networks, benefiting from the contextual proximity between users. The Deferred Routing scheme is presented by introducing a well defined network organisation with different granularity levels, using a hierarchical structure to handle existing clusters of nodes, in conjunction with virtual clusters that aggregate the real clusters. This organisation provides nodes with more stable network views, detracting the unwanted effects of mobility between neighbour clusters. The routing stability and scalable mechanisms are also achieved by using a packet forwarding technique in which the routing information is progressively more accurate, as the level of routing information detail increases when nodes are closer to the desired destination. During the forwarding process, nodes in the borders of clusters, or Gateway nodes, are identified in order to cross different clusters. In the gateway selection process a link quality estimation model is used, ensuring that the best existing gateway nodes are selected, implicitly achieving a balanced load between the available gateways. This forwarding approach further improves the performance of the proposed protocol, being extremely resilient to network changes and enabling self-healing properties of the chosen paths, as they are maintained by the different gateway nodes across clusters."

Top | Past Colloquia | Slides

08/06/2012

"Onto.PT: Towards the Automatic Construction of a Lexical Ontology for Portuguese"

Hugo Oliveira

Cognitive and Media Systems - CISUC
Abstract:

"Given the importance that a lexical ontology plays in the development of natural language processing tools for a language, as well as the work involved in the manual creation of such a resource, we aim at the automatic construction of such a resource for Portuguese. In our work, several public resources for Portuguese, including dictionaries and thesauri, are exploited and integrated in Onto.PT, a public lexical ontology, structured as a wordnet. In this presentation, we will describe the three automatic steps involved in the construction of lexical ontologies from text, including some interesting results of each step, and will present the current version of Onto.PT, released on April 2012."

Top | Past Colloquia | Slides


01/06/2012

"Context-Based Retrieval in Software Development"

Bruno Antunes

Cognitive and Media Systems - CISUC

Abstract:

"As software development projects increase in size and complexity, developers are becoming overloaded. During their work, they need to cope with a large amount of contextual information that is typically not captured and processed in order to enrich their work environment. Especially, in the IDE, they are required to repeatedly switch between different source code artifacts, which often depends on searching for these artifacts in a source code structure of hundreds, if not thousands. We propose a context-based approach to the retrieval of relevant source code artifacts in the work environment of the developer. The source code artifacts, and their relations, are represented using ontologies. A context model represents the focus of attention of the developer at each moment and adapts to changes through the automatic detection of context transitions. This context model is used in conjunction with the ontologies to improve the search and recommendation of source code artifacts in the IDE. We have developed a plugin for Eclipse that is being used by developers in a real world environment. The preliminary results show that our approach has a very positive impact on retrieving relevant source code artifacts for the developer."

Top | Past Colloquia | Slides


25/05/2012

"Cardiovascular performance assessment for p-health applications"

Ricardo Couceiro

Adaptive Computation - CISUC

Abstract:

"Cardiovascular diseases (CVDs) are currently the leading cause of death in the world. However, in industrialized countries CVDs related mortality is decreasing, resulting in the increase of life expectancy and consequently the age at which people are affected or die from a cardiovascular event. This demographic aging is contributing for the rise in health care expenditures, which are struggling the actual conjuncture of the health care systems. To face both economic and social costs resulting from CVDs, the health care paradigm is changing from hospital-centered to individual-centered and from reactive to preventive care, emphasizing the need for new methodologies that can be applied to low-cost, non-invasive and portable systems, for the continuous monitoring of cardiovascular performance and status. The research presented in the present talk is based on the development of new methodologies for the analysis of the phonocardiographic and photoplethysmographic signals and the assessment of several cardiovascular parameters. The first part of the talk will be focused on the assessment of Stroke Volume from the analysis of the phonocardiographic signal. The second part will focus on the detection of motion artifacts and estimation of left ventricular ejection time (LVET) from the photoplethysmographic signal. Finally, the third part of the talk will be based on the evaluation of arterial blood pressure surrogates and their suitability for the estimation of the Baroreflex sensitivity. "

Top | Past Colloquia | Slides


18/05/2012

"Study on non-parametric methods for fast pattern recognition with emphasis on neural networks and cascade classifiers"

Oswaldo Ludwig

ISR - Coimbra

Abstract:

"This presentation focuses on pattern recognition, with particular emphasis on the trade off between generalization capability and computational cost, in order to provide support for on-the-fly applications. Within this context, two types of datasets are analyzed: balanced and unbalanced. For balanced datasets it is adopted the multilayer perceptron (MLP) as classifier, since such model is a universal approximator, i.e. MLPs can fit any dataset. Therefore, rather than proposing new classifier models, the idea is to develop and analyse new training methods for MLP, in order to improve its generalization capability by exploiting four different approaches: maximization of the classification margin, redundancy, regularization, and transduction, aiming at an nonlinear classifier faster than the usual SVM with nonlinear kernel, but with similar generalization capability. Due to its decision function, the SVM with nonlinear kernel requires a high computational effort when the number of support vectors is big. For unbalanced datasets it is analysed the cascade classifier scheme, since such model can be seen as a degenerate decision tree that performs sequential rejection, keeping the processing time suitable for on-the-fly applications. Taking into account that classifier ensembles are likely to have high VC dimension, which may lead to over-fitting the training data,  generalization bounds are derived for cascade classifiers, in order to support the application of structural risk minimization (SRM) principle. It is also presented contributions on feature and data selection, due to the strong influence that data pre-processing has on pattern recognition."

Top | Past Colloquia | Slides


11/05/2012

"Sound Design Guidance as a Contribution towards the Empowerment of Indie Game Developers"

Valter Alves

Information Systems - CISUC

Abstract:

"Currently, expertise in sound design in games is mostly a self-acquired proficiency held by senior sound designers. The broad community of indie game developers faces scarce know-how and restrictive budgets, when trying to integrate sound in games. The empowerment of these practitioners, to perform sound design on their own, could unfold great potential not only in terms of immediate outcomes but also regarding innovative ideas and further expansion of the body of knowledge.

We present sound design guidance as a contribution towards the empowerment of indie game developers. Also, we expose the conditions that we created for the communitarian appropriation and further development of the proposed body of knowledge."

Top | Past Colloquia | Slides


04/05/2012

"Pinpointing Anomalies in Web-based Applications by Using Application-level Monitoring"

João P. Magalhães

Software and Systems Engineering - CISUC

Abstract:

"Mean-time-to-repair (MTTR) is an important factor in the assessment of the availability of a system. With applications crossing the enterprise boundaries and relying on numerous layers of technology the ability to detect, pinpoint and repair from failures leads to a higher MTTR. Some techniques like application-level profiling and runtime analysis may provide good results for detection and localization of failures, although some efforts are still necessary to reduce their performance impact. In this presentation, we will talk about our achievements for anomaly detection in web-based applications. Besides the common monitoring tools we include an application-level monitoring tool. This tool combines application-level profiling with correlation and mutual information analysis to detect and pinpoint anomalies in web-based applications. We conclude our presentation describing different adaptive and selective algorithms. With these algorithms we show how it is possible to reduce the performance impact introduced by application-level profiling, without compromise the detection and pinpointing of anomalies."

Top | Past Colloquia | Slides


27/04/2012

"Exploring Regulatory Mechanisms in Evolutionary Computation"

Rui Lopes

Evolutionary and Complex Systems - CISUC

Abstract:

"Evolutionary Algorithms (EA)  approach the genotype - phenotype relationship differently than does nature, and this discrepancy is a recurrent issue among researchers. Moreover, in spite of some performance  improvements, it is a true fact that biology knowledge has advanced faster than our ability to incorporate novel biological ideas into EAs. Recently, some researchers  have started  exploring computationally new comprehensions of  the multitude of the regulatory mechanisms that are fundamental in both processes of inheritance and of development in natural systems, by trying to include those mechanisms in the EAs.  

One of the first successful  proposals was the Artificial Gene Regulatory Network (ARN) model,  by Wolfgang Banzhaf. Soon after some variants of the ARN were developed. In this presentation, one of these approaches will be described, namely the Regulatory Network Computational Device, demonstrating experimentally its capabilities. The efficacy and efficiency of this alternative are tested experimentally using  typical benchmark problems for Genetic Programming (GP) systems. Moreover, a modified factorial problem is presented, in order to investigate the use of feedback connections and the scalability of the approach. Finally, a preliminary study about the role of neutral mutations  during the evolutionary process is presented."

Top | Past Colloquia | Slides

20/04/2012

"Generation of different model of feedbacks based on software testing and rubrics to support programming learning"

Dirson Campos

Cognitive and Media Systems Group - CISUC

Abstract:

"Building written feedbacks, pedagogically well-constructed, standardized and flexible enough to accommodate students who may be in different stages and learning curves is a complex and laborious task. The Online Judge tools, such as Mooshak are the current state of the art on automatic assessment with generation of  feedback automatic, but the result of feedback is still educationally quite limited. Programming errors, especially logical ones, can be used as a consistent metric for assessing learning. The research done demanded for an innovative way to set the written content of different models of feedback and the creation of an efficient method for the discovery and mapping of logical errors based on software testing and rubrics. The results obtained so far are presented and analyzed."

Top | Past Colloquia | Slides

13/04/2012

"Aid to Diagnosis of Cardiovascular Diseases by Heart sound analysis"

Dinesh Kumar

Adaptive Computation Group - CISUC

Abstract:

"Cardiovascular diseases (CVDs) are the most deadly diseases worldwide leaving behind diabetes and cancer. Being CVDs connected to ageing, the focus of attention has been shifted from curative health care to preventive health care in order to reduce number of hospital visits and enable home care. A computer-based auscultation opens new possibilities in health management by enabling assessment of the mechanical status of the heart using inexpensive and non-invasive methods. A computer based heart sound analysis techniques facilitate physicians to diagnose many cardiac disorders, such as valvular dysfunction and congestive heart failure, as well as to measure several cardiac parameters such as pulmonary arterial pressure, blood pressure, pre-ejection period, etc.  Heart sounds are mainly composed of two main sounds (S1 and S2) however in case of valvular disorders and improper blood flow abnormal sound (heart murmur) is produced.

Heart sound analysis consists of three main tasks: identification of non-cardiac sounds which are unavoidably mixed with the heart sound during auscultation, segmentation of the heart sound in localization of the main sound components, and finally classification of the abnormal heart sounds. Noise detection technique uses template based approach which first utilizes periodic nature of heart sound to identify a reference and then a template matching of the power spectrum of the reference sound and the rest of the sound to detect noisy sounds. For the segmentation problem, two algorithms will be presented for normal and abnormal sounds. The Energy envelope approach for segmentation of the normal heart sounds and wavelet decomposition for abnormal heart sounds will be introduced. Abnormal heart sounds or heart murmurs are assessed based on chaos analysis. Since heart murmurs originate by numerous anomalies in the heart which shows different characteristics, temporal, frequency and nonlinear features are extracted in order to classify heart murmur using a supervised classifier. These methods have been tested on the database prepared with the help of University Hospital of Coimbra."
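As an illustration of the energy-envelope approach mentioned for segmenting normal heart sounds, the sketch below computes a frame-wise Shannon energy envelope of a synthetic phonocardiogram; the frame length, threshold and signal are made up, and the thesis implementation is certainly more elaborate.

```python
# Frame-wise Shannon energy envelope: loud S1/S2 frames stand out from the background.
import numpy as np

def shannon_energy_envelope(pcg, frame=256, hop=128, eps=1e-10):
    """Average Shannon energy per frame of a normalized phonocardiogram."""
    x = pcg / (np.max(np.abs(pcg)) + eps)              # normalize to [-1, 1]
    env = []
    for start in range(0, len(x) - frame + 1, hop):
        seg = x[start:start + frame]
        env.append(-np.mean(seg ** 2 * np.log(seg ** 2 + eps)))
    env = np.array(env)
    return (env - env.mean()) / (env.std() + eps)      # zero-mean, unit-variance

fs = 4000
t = np.arange(0, 2.0, 1.0 / fs)
pcg = 0.05 * np.random.default_rng(0).normal(size=t.size)
pcg[int(0.3 * fs):int(0.35 * fs)] += np.sin(2 * np.pi * 60 * t[:int(0.05 * fs)])  # fake S1 burst

env = shannon_energy_envelope(pcg)
print("candidate sound frames:", np.where(env > 1.0)[0])
```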

Top | Past Colloquia | Slides

30/03/2012

"On The Saliency and Stability of Local Features"

Pedro Martins

Adaptive Computation Group - CISUC

Abstract:

"Local feature detection is a prominent and prolific research topic for the computer vision community, as it plays a crucial role in many vision tasks. The overall performance of such tasks is substantially determined by the effectiveness of the local feature detector, regardless of their complexity or application.   In the first part of this talk, we will overview local feature detection. We will give a special emphasis to affine covariant detectors due to its use in diverse applications, such as wide baseline matching, camera calibration, object recognition, and symmetry detection. In the second part of the talk, two novel algorithms for local feature detection will be presented and discussed. The first one provides affine covariant regions, which are the result of performing a feature-driven detection of Maximally Stable Extremal Regions. The second algorithm follows an information-theoretic approach to detect salient patterns. We will show and compare the results of the proposed solutions to the ones of state-of-the art methods, either in terms of (relative and absolute) repeatability or completeness."

Top | Past Colloquia | Slides

23/03/2012

"The Devils Behind Web Application Vulnerabilities"

Nuno Antunes

Software and Systems Engineering - CISUC

Abstract:

"Web Applications are frequently deployed with critical security bugs that can be maliciously exploited. Avoiding such vulnerabilities depends on the best practices and tools applied during the implementation, testing and deployment phases of the software development cycle. However, many times those practices are disregarded, as developers are frequently not specialized in security and face hard time-to-deploy constraints. Furthermore, the poor efficiency of existing automatic vulnerability detection and mitigation tools opens the door for the deployment of insecure web applications. Realizing the full benefits of secure coding and the limitations of existing tools requires rethinking the way we build web applications. This presentation intends to discuss the devils behind the security of such applications."

Top | Past Colloquia | Slides

16/03/2012

"Electrical Signal Disaggregation using Decomposition Methods"

Marisa Figueiredo

Cognitive and Media Systems - CISUC

Abstract:

Electrical signal disaggregation consists of separating an aggregated electrical signal into its individual loads. Apart from its importance for monitoring energy consumption, this information is also crucial for health care applications, in-home activity modelling, assisted living, and home automation. Studies have shown that individual loads can be detected (and disaggregated) from sampling the power at one single point (e.g. the electric service entrance of the house) by using a non-intrusive load monitoring (NILM) approach. This research aims at contributing to the electrical signal disaggregation task with cutting-edge decomposition methods. First, we focus on feature extraction and pattern recognition tasks. Second, we endeavour to obtain a signal denoising method whose rationale is to remove non-relevant information that should be considered as noise in the electrical signal. Finally, we tackle the electrical signal disaggregation task as a source separation problem, using a non-negative sparse coding approach. Preliminary results will be shown which pave the way for future developments.
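The sketch below illustrates the non-negative factorization idea behind treating disaggregation as source separation: an aggregate consumption matrix is approximated by non-negative "appliance signature" bases times activations. Using sklearn's NMF (rather than a dedicated sparse-coding solver) and the synthetic data are assumptions made for the example.

```python
# Non-negative factorization of a synthetic aggregate consumption matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
T, F, n_appliances = 200, 24, 3                       # time windows x features

true_bases = rng.random((n_appliances, F))            # hidden appliance signatures
activations = rng.random((T, n_appliances)) * (rng.random((T, n_appliances)) > 0.7)
aggregate = activations @ true_bases                  # observed aggregate signal

model = NMF(n_components=n_appliances, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(aggregate)                    # estimated activations
H = model.components_                                 # estimated signatures
print("reconstruction error:", np.linalg.norm(aggregate - W @ H))
```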

Top | Past Colloquia | Slides

09/03/2012

"Autonomic nervous modulation of cardiorespiratory coupling during sleep"

Ramona Cabiddu

Adaptive Computation Group - CISUC

Abstract:

Cardiac and respiratory rhythms have long been known to interact with each other. Although the mechanisms underlying the cardiorespiratory interaction and its physiological significance have not yet been elucidated, there is clinical evidence that reduced synchronization is a prognostic indicator for cardiac mortality. In recent years, a growing interest has emerged in cardiorespiratory coordination during sleep, also given that many sleep disorders, including insomnia and sleep apnea, have been shown to be associated with cardiopulmonary dysfunctions. In this presentation, sophisticated biosignal processing methods, including univariate and bivariate spectral analysis, will be introduced, which were successfully applied to cardiac and respiratory variability signals to assess the effects of autonomic nervous modulation during sleep. Thanks to their non-invasiveness and relatively easy implementation, these methods could represent a suitable tool to accurately investigate autonomic cardiac regulation, respiratory variations and cardiorespiratory coupling during sleep.
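As a small example of a bivariate spectral measure of cardiorespiratory coupling, the sketch below computes the magnitude-squared coherence between a synthetic RR-interval series and a respiratory signal with scipy; the signals, sampling rate and frequency band are illustrative, not the study's protocol.

```python
# Magnitude-squared coherence between synthetic cardiac and respiratory variability signals.
import numpy as np
from scipy.signal import coherence

fs = 4.0                                   # Hz, resampled variability signals
t = np.arange(0, 300, 1 / fs)
resp = np.sin(2 * np.pi * 0.25 * t)        # respiration around 0.25 Hz
rri = 0.05 * np.sin(2 * np.pi * 0.25 * t + 0.8) \
      + 0.02 * np.random.default_rng(0).normal(size=t.size)   # RR-interval series

f, Cxy = coherence(rri, resp, fs=fs, nperseg=256)
band = (f > 0.15) & (f < 0.4)              # high-frequency (respiratory) band
print("mean HF coherence:", Cxy[band].mean())
```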

Top | Past Colloquia | Slides

02/03/2012

"Complexity and Emergence in Societies of Agents"

Tiago Baptista

Evolutionary and Complex Systems Group - CISUC

Abstract:

Throughout the last decades, Darwin’s theory of natural selection has fueled a vast amount of research in the field of computer science, and more specifically in artificial intelligence. The majority of this work has focused on artificial selection, rather than on natural selection. In parallel, a growing interest in complexity science brought new modelling paradigms into the scene, with a focus on bottom-up approaches. By combining ideas from complex systems research and artificial life, we present a multi-agent simulation model for open-ended evolution, and a software framework (BitBang) that implements it. We also present a rule list based algorithm implemented for the brain component of the agents. Using this framework, several simulation environments were created, some of which are presented here, and experimental results from the simulations are analyzed.

Top | Past Colloquia | Slides

24/02/2012

"Semantic Enrichment of Places: Understanding the Meaning of Public Places from Natural Language Texts"

Ana Oliveira Alves

Cognitive and Media Systems Group - CISUC

Abstract:

In this presentation, we present our approach to the challenge of assigning semantic annotations to places, which we call Semantic Enrichment of Places. These annotations are automatically extracted by applying natural language processing and information extraction techniques that have been thoroughly applied and tested using the World Wide Web as the primary source. Here, we are particularly focused on extracting information that allows an external system to distinguish one place from other places that are spatially or conceptually close, because the meaning of a place is a function of its most salient features, present in the textual descriptions found in online resources about that place. In the situation under investigation, places correspond to Points Of Interest (POIs), as these are abundant on the Web. By definition, a POI is a place with meaning to someone and, if it is available online, it is likely that that person's interest is shared by many people. In this approach, the Web is first crawled to obtain a large number of POIs, and then each of them is analyzed in order to obtain its individual Semantic Index: the set of words that best describe it. Besides analyzing POIs, we also propose the application of such an approach in several different contexts, and we integrate these contexts in a multi-faceted view of place.
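A toy illustration of building a "Semantic Index" from textual descriptions is sketched below using TF-IDF weighting; the place names and descriptions are invented, and the actual pipeline involves considerably more NLP and information extraction than this.

```python
# TF-IDF weighting of toy place descriptions; the top-weighted terms act as a
# rough "semantic index" distinguishing each place from the others.
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

poi_texts = {
    "Cafe Santa Cruz": "historic cafe coffee pastries near the old monastery",
    "Estadio Cidade de Coimbra": "football stadium sports events concerts",
    "Joanina Library": "baroque library books university historic visits",
}

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(poi_texts.values())
terms = np.array(vectorizer.get_feature_names_out())

for name, row in zip(poi_texts, tfidf.toarray()):
    top = terms[np.argsort(row)[::-1][:3]]             # 3 most distinctive words
    print(name, "->", list(top))
```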

Top | Past Colloquia | Slides

17/02/2012

"EEG signal processing for epileptic seizure prediction"

Bruno Miguel Direito

Adaptive Computation Group - CISUC

Abstract:

Computational intelligence methods are expected to make a fundamental contribution to the understanding of the brain and, ultimately, to the development of more effective treatments for epilepsy and other brain disorders.

Epilepsy is characterized by the occurrence of unforeseeable and uncontrollable seizures, which jeopardize the quality of life of the patients. Any preventive action (that would involve the prediction of seizures) is expected to improve the daily life of those patients.

The research presented in this talk intends to improve the understanding about the period before epileptic seizures, also known as the preictal period, using electroencephalography (EEG) data.

The first part of the talk is based on the development and optimization of seizure prediction algorithms and the development of a computational framework called EPILAB. Several topics are addressed, such as preprocessing, feature extraction, feature selection, dimensionality reduction, and classification (neural networks, support vector machines).

The second part of the talk is focused on the characterization of spatio-temporal patterns of multichannel EEG data. A novel methodology is presented to identify preictal spatio-temporal patterns. These concepts are further explored using multiway decomposition. Using the parallel factor analysis (PARAFAC) model on epileptic EEG data, it is hypothesized that a spatio-frequency-temporal signature exists in the preictal period.
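To give an idea of what a PARAFAC decomposition of EEG data looks like in code, the sketch below factorizes a random channel x frequency x time tensor into rank-3 spatial, spectral and temporal factors; it assumes the tensorly library's parafac API, and the random tensor is only a stand-in for a real preictal time-frequency representation.

```python
# CP/PARAFAC decomposition of an EEG-like three-way tensor (assumes tensorly's API).
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
n_channels, n_freqs, n_windows, rank = 19, 30, 120, 3

X = tl.tensor(rng.random((n_channels, n_freqs, n_windows)))
weights, factors = parafac(X, rank=rank)               # one factor matrix per mode

spatial, spectral, temporal = factors
print(spatial.shape, spectral.shape, temporal.shape)   # (19, 3) (30, 3) (120, 3)
```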

The results suggest that there are specific EEG activity changes prior to seizures that are believed not to be random.

Top | Past Colloquia | Slides