Nordic AI Meet 2021

Norwegian Artificial Intelligence Research Consortium, nora.ai

The Nordic AI young researcher symposium (Nordic AI Meet) is an important step towards fostering knowledge exchange about AI and its applications across the Nordics. Such exchange is particularly valuable for meeting challenges in the public sector, industry and civil society regarding the use of AI.


The virtual poster session presents work from young researchers across the Nordics, covering a variety of topics in AI and machine learning.


More info: https://nordicaimeet.com


User Engagement in Gamified Human-Computer Interaction

Bahram Salamatravandi

Abstract
Engagement is a broad research theme within HCI that spans several fields of study and has been used to understand user experience. Machines that can track, interpret and reason about emotions can interact efficiently with users on any task, as a competing or cooperating agent. One of the main goals of such an interaction is to engage users through different strategies; modulating the complexity of the task (either by interfering directly or by aiding the user) and giving social feedback to motivate or challenge users are possible engagement approaches. Potential applications of user engagement include enhancing student performance, improving the driving experience, increasing customer satisfaction and supporting healthcare systems. Our study aims to establish approaches for engaging users in interaction with social robots and agents. To study engagement, we built an interaction platform in which a social robot interacts with a human on a gamified task.
Presented by
Bahram Salamatravandi <bahramsalamat@ait.gu.se>
Institution
University of Gothenburg
Keywords
Human-Computer Interaction, Engagement, Social Robots, Gamification, Affective Computing

Physics-informed Neural Network for Viscoelastic Flows

Birane Kane

Abstract
Modeling and simulation of complex flow problems have been a subject of intense research in recent years, with important industrial applications such as viscoelastic polymer flooding in EOR processes. Our research focuses on providing a flexible, state-of-the-art deep learning framework in which we explicitly embed the physical laws describing viscoelastic fluid flow (e.g., the Oldroyd and FENE-P equations) to constrain neural networks and train a reliable model. The effectiveness of the proposed framework is demonstrated on several benchmark tests. To our knowledge, this is the first time deep learning has been applied to viscoelastic fluid flow modelling. Our implementation is based on the TensorFlow library.
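As an illustration of the physics-informed idea described above, here is a minimal sketch in TensorFlow (assumed since the abstract mentions the TensorFlow library). It uses a simple Burgers-type residual as a stand-in for the governing equations; the actual Oldroyd/FENE-P system used in the project is considerably more involved, and the network size and inputs are placeholders.

```python
import tensorflow as tf

# Hypothetical network mapping (x, t) -> u(x, t); layer sizes are placeholders.
net = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation="tanh", input_shape=(2,)),
    tf.keras.layers.Dense(50, activation="tanh"),
    tf.keras.layers.Dense(1),
])

def pinn_loss(xt_data, u_data, xt_colloc, nu=0.01):
    """Data misfit plus PDE residual. The residual below is Burgers' equation
    u_t + u * u_x = nu * u_xx, used only as a simple illustrative stand-in."""
    data_loss = tf.reduce_mean((net(xt_data) - u_data) ** 2)
    with tf.GradientTape() as outer:
        outer.watch(xt_colloc)
        with tf.GradientTape() as inner:
            inner.watch(xt_colloc)
            u = net(xt_colloc)
        grads = inner.gradient(u, xt_colloc)      # columns: [u_x, u_t]
        u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = outer.gradient(u_x, xt_colloc)[:, 0:1]
    residual = u_t + u * u_x - nu * u_xx          # physics term from autodiff
    return data_loss + tf.reduce_mean(residual ** 2)

# During training, pinn_loss would itself be wrapped in a GradientTape
# and minimised with respect to net.trainable_variables.
```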
Presented by
Birane Kane
Institution
NORCE
Keywords

Comparing Binary Cross-entropy and F1-loss in Multi-label ECG Classification

Bjørn-Jostein Singstad, Eraraya Morenzo Muten, Pål Haugar Brekke

Abstract
Cardiovascular diseases (CVDs) are one of the leading causes of death globally, taking an estimated 17.9 million lives each year according to WHO figures. Early detection of CVDs will have a great impact on reducing the severity of the disease, and one of the most widely used diagnostic tools for detecting heart disease is the electrocardiogram (ECG). Rough injury patterns in the ECG are already known prognostic markers, but it is likely that artificial intelligence (AI) will be able to detect more subtle changes and damage patterns in the ECG.

It has already been shown that AI can outperform physicians and cardiologists on certain ECG-based diagnoses, such as detecting silent atrial fibrillation. However, to replace today’s clinically used, rule-based ECG interpretation algorithms, an AI-based ECG interpretation algorithm must be able to classify many diagnoses, where multiple diagnoses can be true at the same time. In this study, we train and compare two Convolutional Neural Networks (CNNs), one using Binary Cross-entropy (BCE) loss and one using soft F1-loss.
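For readers unfamiliar with the soft F1-loss mentioned above, the sketch below shows one common way to formulate it for multi-label outputs: a differentiable F1 computed from "soft" true/false positive counts. This is a generic TensorFlow/Keras illustration, not necessarily the exact formulation used by the authors.

```python
import tensorflow as tf

def soft_f1_loss(y_true, y_pred):
    """Macro soft F1-loss for multi-label classification.
    y_true: binary labels, y_pred: sigmoid outputs, both of shape (batch, n_classes)."""
    y_true = tf.cast(y_true, tf.float32)
    tp = tf.reduce_sum(y_pred * y_true, axis=0)            # soft true positives
    fp = tf.reduce_sum(y_pred * (1.0 - y_true), axis=0)    # soft false positives
    fn = tf.reduce_sum((1.0 - y_pred) * y_true, axis=0)    # soft false negatives
    soft_f1 = 2.0 * tp / (2.0 * tp + fp + fn + 1e-8)       # per-class soft F1
    return 1.0 - tf.reduce_mean(soft_f1)                   # minimise 1 - mean F1

# model.compile(optimizer="adam", loss=soft_f1_loss)       # vs. loss="binary_crossentropy"
```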

To train the models we used a dataset containing 88,253 open-access 12-lead ECGs with 30 different diagnoses used as ground truth for our supervised models. We trained and validated the CNN models using 10-fold cross-validation and scored them on the validation splits using accuracy, F1-score, area under the receiver operating characteristic curve (AUROC), and a metric developed specifically for the PhysioNet Challenge.

The model trained using BCE achieved an AUROC score of 0.92 ± 0.005 and an accuracy of 0.97 ± 0.001, while the model trained using F1-loss achieved an AUROC score of 0.84 ± 0.016 and an accuracy of 0.95 ± 0.01. However, the BCE model achieved an F1-score of 0.35 ± 0.02 and a PhysioNet Challenge score of 0.40 ± 0.02, while the model trained using F1-loss achieved an F1-score of 0.43 ± 0.02 and a PhysioNet Challenge score of 0.54 ± 0.01.

This implies that the model trained with BCE loss is best at predicting true positives and true negatives, while the model trained with F1-loss is better at avoiding false negatives and false positives. Furthermore, this may indicate that the model trained with BCE loss is fitted to the given distribution of diagnoses in the dataset used, while the model trained with F1-loss may generalize better to new data and unknown distributions.
Presented by
Bjørn-Jostein Singstad, Eraraya Morenzo Muten
Institution
Oslo University Hospital, Institut Teknologi Bandung
Keywords
Cardiology, ECG, Convolutional Neural Networks, F1-loss, Binary Cross-entropy

GENDER BIAS IN AI: Perspectives of AI Practitioners

Cathrine Bui & Lara Okafor

Abstract
Background: AI systems are increasing in popularity and are widely implemented in many areas. Media and the literature have reported numerous incidents of discriminatory AI systems. The literature has identified several causes of and solutions to gender bias in AI, and many institutions have published ethics guidelines. However, previous research has not studied the perspectives and practices of practitioners in AI.

Aim: This project explores what perspectives practitioners in AI in Norway have on gender bias in AI by investigating their understanding of technology, how gender bias enters AI systems, and what practices they have in place to detect and address gender bias in AI.

Method: Qualitative multiple case studies were conducted. This study interviewed 13 practitioners in the AI field in Norway. Thematic analysis was used to analyze the interviews.

Findings: Practitioners have implemented few practices, most do not use any ethics guidelines, and they delegate responsibilities to other entities. The informants could identify only a few of the entry points of gender bias mentioned in the literature, such as biased data, human bias and a lack of diverse perspectives. The informants with at least one marginalized identity had more knowledge and more practices in place to address gender bias in AI. They were able to identify more systemic causes and higher-impact levers of intervention.

Conclusion: AI practitioners have inherited from predecessors in the AI field the assumption that distancing oneself from one's work achieves neutral objectivity. These beliefs have a significant influence on practitioners' understanding of technology, and as a result, few ethics practices are in place. Because they underestimate the effects of power, these assumptions reduce their understanding of what causes gender bias in AI to a technical problem. The practitioners see biased data as the main cause, but data is never neutral, because no dataset is equally fair for everyone. The practitioners' belief that there exists a form of fairness that will always be correct for everyone at all times, without considering the context, enables biases to enter AI systems. The AI field needs to examine which parts of its technical heritage and taken-for-granted beliefs negatively impact research and practices on gender bias in AI. This study recommends a paradigm shift among practitioners from imagined objectivity to a critical, intersectional perspective that empowers, includes and creates justice for disadvantaged groups. Inclusion of marginalized perspectives is crucial, and hiring practices should change to increase diversity by training disadvantaged groups in AI.
Presented by
Cathrine Bui <bui.cathr@gmail.com>
Institution
University of Oslo
Keywords
Ethics, responsible AI, gender bias, discrimination, human-centered AI, data bias

An Algorithm for Stochastic and Adversarial Bandits with Switching Costs

Chloé Rouyer, Yevgeny Seldin, Nicolò Cesa-Bianchi

Abstract
Multi-armed bandits is a widely studied framework for the exploration-versus-exploitation trade-off found in many decision problems, ranging from ad recommendation systems to medical trial design. Simply put, we consider a learner that repeatedly selects one arm from a list of $K$ arms, observes and suffers a loss associated with that arm, and then uses the observed data to minimize her cumulative loss up to a time horizon $T$. The performance of the learner is measured in terms of regret, which compares the cumulative loss of the learner with the cumulative loss of the arm that has the smallest cumulative loss over $T$ rounds. In the past few years, a particular focus has been on deriving algorithms that adapt to different types of data without being aware of the environment's regime [1]. We studied a variation of the stochastic and adversarial multi-armed bandit problem that includes switching costs [2], meaning that the learner pays a price $\lambda$ every time she switches the arm being played. We proposed and analyzed an algorithm based on an adaptation of the Tsallis-INF algorithm [1] that requires no prior knowledge of the regime or the time horizon. In the oblivious adversarial setting, it achieves the minimax optimal regret bound of $O((\lambda K)^{1/3}T^{2/3}+\sqrt{KT})$. In the stochastically constrained adversarial regime, which includes the stochastic regime as a special case, it achieves a regret bound of $O(((\lambda K)^{2/3}T^{1/3}+ \ln T)\sum_{i\neq i^*}\Delta_i^{-1})$, where the $\Delta_i$ are the sub-optimality gaps and $i^*$ is the unique optimal arm. In the special case of $\lambda = 0$ (no switching costs), this bound is also minimax optimal within constants. We also explored variants of the problem where the switching cost is allowed to change over time.
Presented by
Chloé Rouyer
Institution
University of Copenhagen
Keywords
Learning Theory, Multiarmed Bandits

Towards an Inclusive Framework for AI-based Care Robots

*Saplacan, Diana; **Martinez, Santiago; *Tørresen, Jim.

Abstract
Recent research on 36 prominent documents on Artificial Intelligence (AI) principles shows that human rights are one of the most critical aspects of Principled AI. They have also been addressed in several official documents from a regulatory perspective. One of these human rights is the right to health(care). However, in AI-based applications such as care robots, introduced as part of healthcare services and products, this right is challenged by privacy, safety and security issues, but also by accessibility and usability issues, especially when vulnerable users are to interact with these platforms. We therefore propose discussing how the concept of care acquires new valences with AI-based care robots, from both theoretical and practical perspectives. Ethical aspects are discussed in the light of the integration of care robots into home- and healthcare services. An inclusive theoretical and practical framework is proposed to guide the design choices of designers, engineers and roboticists in the design of AI applications such as care robots. The motivation behind this approach lies in the idea of supporting human autonomy and rights, including the right to health(care). Finally, this work aims to equip the audience with concrete examples of how human autonomy and rights can be operationalized in practice, beyond their technical or regulatory application.

References:

[1] J. Fjeld, N. Achten, H. Hilligoss, A. Nagy, and M. Srikumar, “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI,” Social Science Research Network, Rochester, NY, SSRN Scholarly Paper ID 3518482, Jan. 2020. doi: 10.2139/ssrn.3518482.

[2] WHO, “The Right to Health,” Office of the United Nations High Commissioner for Human Rights, Fact Sheet No. 31. [Online]. Available: https://www.ohchr.org/documents/publications/factsheet31.pdf

[3] D. Saplacan, W. Khaksar, and J. Torresen, “On Ethical Challenges Raised by Care Robots: A Review,” in Proceedings of The IEEE, 20th International Conference in Advanced Robotics and Its Social Impacts (ARSO), Japan/Virtual, 2021, p. 8. DOI: 10.1109/ARSO51874.2021.9542844

[4] V. Dignum et al., “Ethics by Design: Necessity or Curse?,” in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New York, NY, USA, Dec. 2018, pp. 60–66. doi: 10.1145/3278721.3278745.

[5] N. A. Smuha, “Beyond a Human Rights-based approach to AI Governance: Promise, Pitfalls, Plea,” Social Science Research Network, Rochester, NY, SSRN Scholarly Paper ID 3543112, Feb. 2020. doi: 10.2139/ssrn.3543112.
Presented by
Diana Saplacan
Institution
*Robotics and Intelligent Systems Research Group, Department of Informatics, University of Oslo; **eHealth Research Group, Faculty of Health and Sport Sciences, University of Agder;
Keywords
Artificial Intelligence (AI), ethics, rights, (health)care, robots, studies with-, about-, and for users

Not All Comments are Equal: Insights into Comment Moderation from a Topic-Aware Model

Elaine Zosa, Ravi Shekhar, Mladen Karan, Matthew Purver

Abstract
Moderation of reader comments is a significant problem for online news platforms. Here, we experiment with models for automatic moderation, using a dataset of comments from a popular Croatian newspaper. Our analysis shows that while comments that violate the moderation rules mostly share common linguistic and thematic features, their content varies across the different sections of the newspaper. We therefore make our models topic-aware, incorporating semantic features from a topic model into the classification decision. Our results show that topic information improves the performance of the model, increases its confidence in correct outputs, and helps us understand the model’s outputs.
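As a rough illustration of the "topic-aware" idea above, the sketch below concatenates per-document topic distributions from a topic model with lexical features before classification. The data, component choices and scikit-learn pipeline are placeholders; the authors' actual models may be neural and use a different topic model.

```python
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

comments = ["example comment ...", "another comment ..."]   # placeholder data
labels = [1, 0]                                              # 1 = violates moderation rules

# Lexical features
X_text = TfidfVectorizer().fit_transform(comments)

# Topic features: per-document topic distributions from LDA
counts = CountVectorizer().fit_transform(comments)
X_topics = LatentDirichletAllocation(n_components=5, random_state=0).fit_transform(counts)

# Topic-aware classifier: concatenate lexical and topic features
X = hstack([X_text, X_topics])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
```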
Presented by
Elaine Zosa
Institution
University of Helsinki, Queen Mary University of London, Jozef Stefan Institute
Keywords
topic modelling, comment moderation, model interpretability

Domain-Transformation of MRI-derived Time Series with Deep Learning

Erin B. Bjørkeli, John Terje Geitung, Morteza Esmaeili

Abstract
In this project, we have developed a deep neural network that performs a domain transformation on the raw data acquired from magnetic resonance spectroscopic imaging. The network was able to extract low-dimensional features of the spectra and satisfactorily reconstruct the frequency-domain representation of the time series.
Presented by
Erin Beate Bjørkeli <erinbb@online.no>
Institution
Akershus University Hospital
Keywords
Magnetic resonance spectroscopy, Domain transformation, Fast acquisition, Deep learning

Recognition of Human Activities using UWB Radar and Deep Learning

Farzan M. Noori

Abstract
The population of older adults is increasing every day, and the trend is expected to continue [1]. With the help of advanced assistive technologies, we can provide them with better healthcare. In this work, we present a novel sensing approach based on an ultra-wideband (UWB) sensor to recognize human activities and forecast future events. Previously, researchers have focused on wearable or video sensors to detect a person’s behaviour [2, 3]. However, it is challenging for older people to wear devices 24/7, and vision-based sensors always carry privacy concerns. In contrast, the XeThru ultra-wideband (UWB) radar is a non-contact ambient sensor with no such privacy issues. We collected data using multiple modalities: UWB, depth images, thermal images, and an actigraphy device to record heart rate (HR) as ground truth. The dataset comprised recordings of each participant sitting on a sofa as the normal situation. Afterwards, we instructed the participants to exercise in order to raise their HR. When the HR exceeded 140 BPM, the participants lay down on the floor in front of the sensors until their HR returned to a normal level. In this research, we classify normal versus abnormal situations using CNNs and LSTMs, with accuracy, precision and recall as performance measures. We obtained 95% and 98% accuracy using CNNs and LSTMs, respectively. Furthermore, seven classes were introduced based on HR levels, and we obtained promising results with LSTMs. This was a first step towards classifying activities using UWB radar. In the future, we plan to predict future HR based only on UWB data, as it would be challenging for older people to wear actigraphy devices, and to include other modalities that were not part of our preliminary analysis.
Presented by
Farzan M. Noori
Institution
Department of Informatics, University of Oslo
Keywords
Human Activity Recognition, CNNs, XeThru UWB Sensor, LSTMs

5G NR-based Environment Detection for Seamless Localization utilizing CNN

Ghazaleh Kia, Jukka Talvitie, Laura Ruotsalainen

Abstract
Artificial Intelligence (AI) plays an important role in spatiotemporal data analysis. Spatiotemporal data, which are used frequently in the localization domain, can provide accurate position solutions for autonomous vehicles, drones and other mobile objects. However, to gain the greatest possible advantage from spatiotemporal data, a powerful tool such as AI is required. Machine learning methods can estimate the desired parameters for localization purposes by being trained on a given set of data. In this research, we aim to use a Convolutional Neural Network (CNN) to detect the environment in which localization is taking place. CNNs are a convenient candidate for solving complex problems and detecting the patterns present in the structure of the data. To detect the environment, we are interested in utilizing 5G New Radio Channel State Information (CSI) data. CSI will be continuously exchanged between the base station and the user equipment. Furthermore, CSI characterizes the multipath channel between the transmitter and the receiver and is consequently well suited to detecting changes in the environment. Being aware of the environment while localizing a mobile object is important: a mobile object, which can be a pedestrian or a vehicle, moves between environments, for example from a park into a building or from a street into a parking area. The localization technique must be aware of these changes. This awareness supports seamless localization when the environment changes from indoor to outdoor or vice versa, since the sensors and settings used to find the location may vary between environments. The outcome of this research will support future seamless localization projects.
Presented by
Ghazaleh Kia <ghazaleh.kia@helsinki.fi>
Institution
University of Helsinki
Keywords
5G New Radio, Artificial Intelligence, Convolutional Neural Networks, Localization
Chat with Presenter
Available Nov 1st, 2021 at 13:30-14:30 (CEST)

Unsupervised Learning of Fish as a Novel Class in Detectron2

I-Hao Chen, Nabil Belbachir

Abstract
In this work, we introduce a simple approach to add a novel class, e.g., fish (Atlantic salmon), using the Mask R-CNN [1] algorithm without the need for human annotations. We thereby solve the problem of labelling underwater fish manually, which can be cost- and time-intensive. The main application is to run our algorithm on automated drones such as the ARV-i [2] in semi-static environments like aquaculture to track fish, estimate their weight or count them, but other use cases are easily adapted since human parameter tuning is minimal. We use Detectron2 [3] from Facebook AI Research as the implementation of Mask R-CNN. Our method (see Figure 1) is a sequence of algorithms. Firstly, we split the training data into copies of different sizes. Detectron2 then runs inference on the data and outputs various instance segmentations with low certainty. The different image dimensions increase the class variance and detection range, thereby providing more of the wanted segmentations. Images without segmentations are saved as background files. We curate the data by resizing the segmentations to the original dimensionality of the image. The algorithm then unifies all potentially overlapping segmentations belonging to one object if they score a high enough IoU (Intersection over Union) value. Afterwards, we apply a cascade of filters that check the segmentations for extent, solidity, equivalent diameter, mean value and aspect ratio, which purifies the dataset of misdetections and other foreign objects. These hyperparameters must be tuned by a human as a pre-processing step. Finally, the remaining segmentations are pasted randomly onto background images. After training on these images, we can feed in new (or the initial training) data containing the novel object, and Detectron2 confidently detects it as the novel class (e.g., fish). Current restrictions are that the algorithm stands and falls with the inference at high uncertainty (>95%) and that it needs data in which at least some of the novel objects appear separated. We also aim to implement a tracking algorithm to reduce detection errors due to occlusions by exploiting a spatiotemporal analysis.
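To make the IoU-based unification step concrete, here is a minimal plain-NumPy sketch that greedily merges binary masks whose overlap exceeds a threshold. The function names, threshold and greedy strategy are illustrative, not the authors' exact procedure.

```python
import numpy as np

def mask_iou(a, b):
    """Intersection over union of two boolean masks of equal shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def merge_overlapping(masks, iou_threshold=0.5):
    """Greedily unify masks that overlap above the IoU threshold, so that
    fragments belonging to the same object end up as one segmentation."""
    merged = []
    for m in masks:
        m = m.astype(bool)
        for i, existing in enumerate(merged):
            if mask_iou(m, existing) >= iou_threshold:
                merged[i] = np.logical_or(existing, m)
                break
        else:
            merged.append(m)
    return merged
```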

Acknowledgements: Thanks to the company SubC3D AS for providing the original video/images taken with an ARV-i [2].

References: 1. He K, Gkioxari G, Dollar P et al. (2020) Mask R-CNN. IEEE Trans Pattern Anal Mach Intell 42:386–397. 2. Transmark Subsea (2021) Introducing ARV-i. https://www.transmarksubsea.com/introducing-arv-i/. Accessed 17 Aug 2021. 3. Wu Y, Kirillov A, Massa F et al. (2019) Detectron2. https://github.com/facebookresearch/detectron2. Accessed 17 Aug 2021.
Presented by
I-Hao Chen
Institution
Norce Norwegian Research Centre AS
Keywords
Unsupervised Learning, Detectron2, Mask R-CNN, Object Segmentation

Automated Noise Monitoring with Machine Learning

Jon Nordby, Erik Sjølund, Ole Johan Aspestrand Bjerke, Fabian Nemazi

Abstract
Noise is a large problem in modern urban society. In Europe, over 100 million people are exposed to noise above the target levels, which leads to sleep disturbances and increased risk of cardiovascular disease. Improvements in the capabilities and cost of embedded systems and the Internet of Things have made it easier to measure noise, in terms of sound level, over longer periods of time and at more locations. However, the sound level alone tells us very little about the causes of noise, which must be understood in order to plan and validate remedies.

The use of Machine Learning is starting to make it possible to automatically detect and classify audio, and there is a growing body of existing research in relevant topics such as Sound Event Detection, Environmental Sound Classification, Acoustic Scene Classification and Universal Source Separation. However, we find that most existing work does not consider the relationship between the noise magnitude/severity (sound level) with noise sources (classification).

Our ongoing research aims to find practical solutions for automatically determining the source of measured noise, by combining sound detection and classification techniques with sound level measurements according to current noise regulations and acoustical engineering practice.

Our contributions so far have shown that it is feasible to do on-sensor classification of environmental sound on low-cost microcontrollers, that Sound Event Detection (SED) at a known source can be used to create logbooks of noisy activity, and that doing SED at both source and receiver makes it possible to determine the source of the noise experienced at the receiver.

Current work focuses on new task formulations for Machine Learning that incorporate the requirements of acoustic modelling and noise regulations, including Noise Detection & Classification. We hope that the formalization of these tasks will enable more research and increase the relevance of Machine Learning for understanding acoustical noise.

Our primary method of research involves case-studies and demonstrator projects in the real-world, and we invite parties interested in noise, acoustics and machine learning to collaborate with us.
Presented by
Jon Nordby <jon@soundsensing.no>
Institution
Soundsensing AS
Keywords
Environmental Noise, Noise Monitoring, Sound Event Detection, Wireless Acoustic Sensor Network

Design of trustworthy and inclusive AI services in the public sector

Karolina Drobotowicz

Abstract
In May 2021, the European Commission published its new AI regulation proposal (the AI Act) [1]. This brought a sense of urgency to the developers of AI services in industry, government and the public sector. In recent discussions with companies, AIGA [2] and the Finnish government's public ICT representatives, I have recognized that close collaboration between policy, technology and societal experts is needed more than ever. Furthermore, it is important to include civil society in the dialogue and deliberations [3]; as shown in my recent paper [4], citizens have their own set of requirements for trustworthy public AI services.

My proposed research focuses on how trustworthy and inclusive AI can be effectively implemented in the public sector: understanding the technological needs, challenges and regulations for devising trustworthy AI services, while developing tools, methods and critical assessment of outcomes. It builds on research in AI ethics including transparency, inclusion, accountability, auditability, and explainable AI, while engaging aspects of HCI and Human-AI interaction.

I am using a multidisciplinary and participatory approach in this research. On one side, I am collaborating with key stakeholders such as public sector representatives (e.g. from the City of Helsinki), public service designers and developers, and policy and legal experts. I plan to conduct qualitative interviews, focus group sessions and, if possible, ethnographic and case studies with this group; this work has already started with a study on the implications of the AI Act for educational and public services. On the other side, I am including civil society in the study, such as regular citizens and representatives of vulnerable communities that might be affected by public AI services. With this group, I have been conducting interviews, workshops and focus groups. Finally, I intend to bring both groups together for co-design sessions. As a result, I intend to publish 1) a framework for a successful participatory method for public AI services and 2) an exemplar trustworthy and inclusive public AI service interface.

In summary, I aim to provide realistic solutions and recommendations for making public AI services trustworthy and beneficial to society, and in particular to under-represented groups. This could contribute, first, to the empowerment of citizens by acknowledging their experiential expertise and providing them with ways to participate in developing public AI services; second, to the successful implementation of AI services by public service providers; and third, to understanding the implications of, and preparations for, the AI Act.

References: [1] European Commission, “Proposal for a Regulation laying down harmonised rules on artificial intelligence, ”https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-laying-down-harmonised-rules-artificial-intelligence, accessed 28.05.2021 [2] Artificial Intelligence Governance and Auditing (AIGA) project, https://ai-governance.eu/, accessed 28.05.2021 [3] Young, M., Magassa, L. & Friedman, B. Toward inclusive tech policy design: a method for underrepresented voices to strengthen tech policy documents. Ethics Inf Technol 21, 89–103 (2019) [4] Drobotowicz K., Kauppinen M., Kujala S. (2021). Trustworthy AI Services in the Public Sector: What Are Citizens Saying About It?. In: Requirements Engineering: Foundation for Software Quality, REFSQ’21
Presented by
Karolina Drobotowicz <drobotowicz.karolina@aalto.fi>
Institution
Aalto University, Department of Computer Science
Keywords
Artificial Intelligence, Public Sector, Transparency, Participatory Design, Human-AI interaction, Human-Computer Interaction, Trustworthy AI
Chat with Presenter
Available November 3rd 11:00-12:00

Predicting progression & cognitive decline in amyloid-positive patients with Alzheimer’s disease

Hákon Valur Dansson, Lena Stempfle, Hildur Egilsdóttir, Alexander Schliep, Erik Portelius, Kaj Blennow, Henrik Zetterberg, Fredrik D. Johansson

Abstract
In Alzheimer’s disease, amyloid-β (Aβ) peptides increase amyloid levels in CSF in the brain, which are a key pathological hallmark of the disease. However, increased CSF amyloid levels may also be present in cognitively unimpaired elderly individuals. Therefore, it is of great value to explain the variance in disease progression among patients with Aβ pathology.

A cohort of n=2293 participants, of whom n=749 were Aβ-positive, was selected from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database to study heterogeneity in disease progression for individuals with Aβ pathology. The analysis used baseline clinical variables including demographics, genetic markers and neuropsychological data to predict how the cognitive ability and AD diagnosis of subjects progressed, using statistical models and machine learning. Due to the relatively low prevalence of Aβ pathology, models that were fit only to Aβ-positive subjects were compared to models fit to an extended cohort including subjects without established Aβ pathology, adjusting for covariate differences between the cohorts.

Aβ pathology status was determined based on the Aβ42/Aβ40 ratio. The best predictive model of change in cognitive test scores for Aβ-positive subjects at the two-year follow-up achieved an R² score of 0.388, while the best model predicting adverse changes in diagnosis achieved a weighted F1 score of 0.791. Conforming to expectations, Aβ-positive subjects declined faster on average than those without Aβ pathology, but the specific level of CSF Aβ was not predictive of progression rate. When predicting cognitive score change four years after baseline, the best model achieved an R² score of 0.325, and it was found that fitting models to the extended cohort improved performance. Moreover, using all clinical variables outperformed the best model based only on a suite of cognitive test scores, which achieved an R² score of 0.228.

Our analysis shows that CSF levels of Aβ are not strong predictors of the rate of cognitive decline in Aβ-positive subjects when adjusting for other variables. Baseline assessments of cognitive function account for the majority of variance explained in the prediction of two-year decline but are insufficient for achieving optimal results in longer-term predictions. Predicting changes both in cognitive test scores and in diagnosis provides multiple perspectives on the progression of potential AD subjects.
Presented by
Lena Stempfle
Institution
Chalmers University of Technology, Sweden
Keywords
Alzheimer’s disease; Progression; Prediction; Machine learning

Detection of events in real-time news streams with dark entities using knowledge graphs

Marc Gallofré Ocaña, Andreas L. Opdahl

Abstract
Newsrooms compete in a fierce race to be the first to publish news stories about current events and developments, where delays imply economic losses [1]. News stories are no longer static and immutable newspaper articles; rather, they are adapted to the audience and constantly updated [2]. Many projects exploit Linked Open Data knowledge bases such as DBpedia and Wikidata to provide background information and to enrich and analyze news content. However, a common problem is that emerging entities in the news are usually missing from knowledge bases, generating what are known as dark entities [3]. In our research, we investigate how to deal with dark entities for open-event detection in real-time RDF streams representing news articles and pre-news such as Twitter messages. For this purpose, we propose to use evolving and dynamic vector representations of the RDF graphs that represent the entities. These vector representations are continuously re-trained using small datasets corresponding to a sliding window of real-time news reports. Every time a new entity is identified, if it is an entity from a knowledge base, its vector is initialized with the corresponding value from an already pre-trained vector; otherwise, we generate a random vector. As soon as a dark entity is captured in a knowledge base, we map its previously learnt vector to the captured entity. Only the vectors representing entities that appear in the current window are trained in each iteration. To evaluate our solution, we plan to run an experiment with a dataset where all entities are known and take a subset of them as dark entities. We can then compare the resulting vectors against the same dataset using only already pre-trained vectors, by measuring how different the results are in the two scenarios. The purpose of our research is to provide a real-time solution for journalists to follow current developments of events, based on RDF graph representations that consider the dynamic and evolving context of news. By using RDF representations of news and tweets, we can (a) create language-independent representations; (b) enrich news and tweets with background information from Linked Open Data knowledge bases in a one-time pre-analysis at ingestion; and (c) exploit the “structure” of events. With our proposed solution we aim to provide a tool for (a) dealing with dark entities and evolving contexts and (b) helping journalists follow the development of current events and what is being said in social media in real time. Future work includes visualization techniques for journalists to explore events and background information, and an analysis of the performance of the vectors over time and their sensitivity to fake news.
Presented by
Marc Gallofré Ocaña
Institution
University of Bergen
Keywords
Knowledge graph, open-event detection, dark entity, real-time

Monitoring melt by classifying Cryosat-2 waveforms

Martijn Vermeer, David Völgyes, Malcolm McMillan, Daniele Fantin

Abstract
As part of ESA’s POLAR+ call, we are involved in the Earth Observation for Surface Mass Balance (EO4SMB) project. The Greenland and Antarctic ice sheets are major components of Earth's climate system and are of key importance for understanding climate change. The mass balance is the net mass exchange of the various processes operating on the ice sheet, such as precipitation, sublimation, snow drift, meltwater runoff, ice discharge and basal melt. In addition to sea level rise, these processes directly impact glacier dynamics, global ocean circulation and marine ecosystems. An important parameter for understanding melt dynamics, and eventually ice sheet mass balance, is the liquid water in the snowpack. It has been found that the presence of liquid water in the snowpack causes subtle changes in the Cryosat-2 altimeter waveform. Cryosat-2 is a radar altimeter with a 250 m wide footprint that covers the entire Arctic, including the Greenland ice sheet, at roughly monthly intervals. Regular melt information can therefore potentially be derived by classifying Cryosat-2 waveforms. In addition, Cryosat-2 measurements are not impacted by clouds, unlike spectroradiometers such as MODIS, whose archives are therefore discontinuous. The relationship between the raw Cryosat-2 waveform and the presence of liquid water is non-trivial, because many factors influence the waveform, such as density, water content, grain size and shape, and layering. In this study we evaluate whether a deep learning approach to the classification of Cryosat-2 waveforms can be used to extract melt information. Training data are derived by spatio-temporal matching of Cryosat-2 measurements with MODIS imagery. The land surface temperature (LST) derived from MODIS measurements indicates whether liquid water can be expected in the snowpack. This gives a dataset with 1D radar waveform features and LST labels. We propose a 1D CNN for classifying the raw radar altimeter waveforms. Due to limited data, the model architecture was designed to avoid overfitting, and domain-specific data augmentations were applied. To compensate for the significant class imbalance (~1:40), the classes are weighted inversely to their frequencies. Melt events over the Greenland ice sheet could thus be monitored at a higher spatial and temporal resolution than has been possible until now, and processing historical Cryosat-2 data would yield a complete, gapless melt event record from 2010 onwards.
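The inverse-frequency class weighting mentioned above can be implemented in a few lines. The sketch below uses NumPy with a simulated 1:40 label imbalance as a placeholder, and shows Keras-style class_weight usage in the final comment.

```python
import numpy as np

# Placeholder labels: 0 = dry/frozen, 1 = melt, with roughly a 1:40 imbalance.
labels = np.random.binomial(1, 1.0 / 41.0, size=100_000)

# Weight each class inversely proportional to its frequency so that the rare
# "melt" class contributes as much to the loss as the majority class.
counts = np.bincount(labels)
class_weight = {c: len(labels) / (len(counts) * counts[c]) for c in range(len(counts))}

# e.g. with Keras: model.fit(X, labels, class_weight=class_weight, ...)
```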
Presented by
Martijn Vermeer
Institution
Science and Technology AS
Keywords
deep learning, surface mass balance, melt dynamics, Greenland ice sheet, Cryosat-2, MODIS

Allocating Opportunities in a Dynamic Biased Model

Meirav Segal

Abstract
The use of AI models in allocation problems has become increasingly prevalent. These problems are characterized by a decision maker (DM) allocating limited goods (e.g. loans) among a population in order to maximize some objective. Regretfully, modern society still harbours biases against some sub-populations, which are reflected in AI models. Moreover, recent publications have shown that imposing fairness constraints on such problems might not benefit the group we wish to protect. Different allocation policies can provide different incentives for increasing qualification, thus changing future allocations among groups. This feedback effect could help eliminate existing bias, or exacerbate it.

We present a model of a dynamic allocation process to simulate a scenario of college admission. Higher education is a key element towards many career paths. As such, access to higher education could be crucial for self fulfillment and/or financial security. Unfortunately, currently there are groups with lessened access to this opportunity due to societal biases.

We examine the dynamics of a decision process with two groups in the population, defined based on a protected attribute (such as race or gender). While the innate ability is uniformly distributed in both groups, one group is disadvantaged in the sense that there is societal bias against its members, which leads to reduced success probability. The admission decisions affect not only the DM's utility (sum of discounted rewards), but also future bias, which in turn affects the range of possible future utility. Since there is a relation between bias and utility, it is interesting to observe how the utility maximizing policy behaves under different model parameters.

We consider two possible feedback mechanisms. The admission policy can increase group qualification by setting a lower acceptance bar for the disadvantaged group (i.e. affirmative action). This could potentially generate more role models and provide an incentive to invest, since there is a greater chance of being admitted. On the other hand, this very action might have the reverse effect: lowering the bar means that less qualified members of this group are admitted, so fewer admitted individuals from that group are likely to graduate. This might increase societal biases by reinforcing existing stereotypes and causing members of that group to internalize them.

Our preliminary analysis for one type of bias shows that there are three regions of the bias state with distinct behaviour of the utility maximizing policy - granting none of the opportunities to the disadvantaged group, setting the same bar for both groups and applying affirmative action. These regions are also clearly seen in experimental results produced using policy iteration. In order to compare policies with respect to fairness, we define the notion Horizon Fairness which considers the impact of the policy on future bias. In the future, we will extend our analysis to the other kind of bias and the combination of both. In addition, we would like to provide some results for optimization under uncertainty of the feedback mechanism.
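The policy-iteration procedure mentioned above is standard; for reference, a generic finite-MDP version is sketched below in NumPy. The actual model in the poster additionally tracks the bias state and group-dependent admission dynamics, which are not shown here.

```python
import numpy as np

def policy_iteration(P, R, gamma=0.95):
    """Exact policy iteration for a finite MDP.
    P[a, s, s_next]: transition probabilities; R[s, a]: expected immediate rewards."""
    n_actions, n_states, _ = P.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly
        P_pi = P[policy, np.arange(n_states)]            # (S, S)
        R_pi = R[np.arange(n_states), policy]            # (S,)
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # Policy improvement: act greedily with respect to one-step lookahead
        Q = R.T + gamma * P @ V                          # (A, S)
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy
```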
Presented by
Meirav Segal
Institution
University of Oslo
Keywords
Fairness,Bias,Long Term,Reinforcement Learning,Uncertainty
Chat with Presenter
Available November 2nd, 10:00-12:00

AI-enabled proactive mHealth

Muhammad Sulaiman, Anne Håkansson, and Randi Karlsen

Abstract
Traditionally, healthcare is based on a reactive approach: reacting when symptoms appear, i.e., taking action when a crisis occurs. This is convenient for most health-related concerns, but it undervalues self-management and self-empowerment as means of promoting wellbeing. Proactive health, conversely, can predict and prevent a situation beforehand. It does not rule out the reactive approach but complements it: allowing supportive actions before a crisis enables care that empowers the user to promote wellbeing. Mobile health (mHealth) can be pivotal for proactive health. mHealth uses wearables to render health services on the fly and empowers the user by providing new insights into the health information gathered with wearables and mobile devices. With data collected from users carrying these devices at all times, it is vital to understand the need for user-level decision-making. To achieve proactive health with the capability of prediction and prevention, AI can contribute by applying reasoning and negotiation to the available health data and recognizing patterns, which can be used to automate processes and promote healthier practices.

The objective presented here is to establish AI-enabled proactive mHealth that can predict and prevent a situation promptly, with automated decision-making powered by predictive analytics to support users in making accurate decisions instantly, and a personalized system that considers the uniqueness of each user. The main contribution of the research presented in this abstract is to establish a core framework for proactive mHealth. It includes machine learning algorithms such as support vector machines (SVM), random forests (RF), decision trees (DT), regression models and artificial neural networks (ANN) to provide automated decision-making (ADM). ADM powered by predictive analytics can gather, process and model health information to render automated decisions. The framework also includes the P5 design approach to mHealth, which is predictive, preventive, participatory, personalized and psycho-cognitive, as well as just-in-time adaptive interventions (JITAI) for system implementation; JITAIs provide digital health interventions with a focus on timeliness. Together, these form the core framework of AI-enabled proactive mHealth.

One main challenge is to establish AI-enabled proactive mHealth for individuals. The system must consider multiple attributes as input for providing proactiveness. These attributes provide information about the context of the individual, such as the surroundings, along with profiling and characteristics. Information gathered from wearables can provide insights into the current state as well as daily patterns. In conclusion, research is to be conducted to establish proactive mHealth with prediction and prevention capabilities. Automated decision-making with predictive analytics will be the core part of such a system; this includes identifying data sources and applying precise AI techniques.
Presented by
Muhammad Sulaiman <muhammad.sulaiman@uit.no>
Institution
Department of Computer Science, UiT The Arctic University of Norway
Keywords
mHealth, proactive health, Artificial Intelligence, digital interventions, automated decision-making.

Reinforcement Learning for Optimal Hour-Ahead Electricity Trading with Battery Storage

Peyman Kor

Abstract
Presented by
Peyman Kor
Institution
Department of Energy Resources, University of Stavanger, Stavanger, Norway
Keywords
Reinforcement learning

I know where you are going! Transport mode detection and understanding feature importance based on smartphone sensors

Philippe Büdinger and Tor-Morten Grønli

Abstract
Presented by
Philippe Büdinger
Institution
Kristiania University College, Department of Technology
Keywords

NLC activity detection using Convolutional Neural Network

Rajendra Sapkota, Puneet Sharma, Ingrid Mann

Abstract
Optically thin layers of tiny ice particles near the summer mesopause, known as noctilucent clouds, are of significant interest within the aeronomy and climate science communities. We investigate deep-learning-based image classifiers for noctilucent cloud images observed from different locations and under different weather conditions, using several convolutional neural network architectures to distinguish noctilucent cloud images from the rest. We compare the performance of custom deep learning architectures with that of fine-tuned state-of-the-art models (SqueezeNet, ShuffleNet, and MobileNet). Furthermore, we investigate the most informative pixels in the input space of test images and visualize them in order to demonstrate the efficiency of convolutional nets for feature extraction. Based on this, we identify the most robust model for our classification task.
Presented by
Rajendra Sapkota <rajendra.sapkota@uit.no>
Institution
UiT, The Arctic University of Norway
Keywords
Noctilucent cloud (NLC), deep learning, classification

Complexity and Predictability Analysis of the Elder Problem Using Big Data and Machine Learning

Roman Khotyachuk, Klaus Johannsen

Abstract
In this work, the d3f software, based on the general-purpose PDE simulation software 'ug', is used to numerically solve problems from computational fluid dynamics. We have ported the d3f software to a Spark cluster. This modification enables massively parallel runs of the d3f software, efficient post-processing, and further analysis of vast amounts of data using Big Data tools and Machine Learning approaches. Specifically, our Spark-d3f setup is used for simulation and analysis of the Elder problem. For this problem, we achieved the following results.

1. Investigated the steady-state solutions of the Elder problem with regard to the Rayleigh numbers (Ra), grid sizes, perturbations, etc. 2. Analyzed the complexity of solutions with respect to time, solution types, and other factors. 3. Created a tool for visual exploration of large solution sets from the Elder problem. 4. Developed predictive models for the Elder problem using different classification methods.

Our predictive models can be divided into the following types, depending on how the predictors (features) are designed: 1) fully informed models; 2) partially informed models; 3) “black box” models. The best of our models can predict a steady-state of the Elder problem (i.e., when time t > 50 years) with 95% accuracy at t=8-9 years.
Presented by
Roman Khotyachuk
Institution
NORCE Norwegian Research Centre AS
Keywords
The Elder problem, Complexity Analysis, Big Data, Machine Learning

Improved Analysis of the Tsallis-INF Algorithm in Stochastically Constrained Adversarial Bandits and Stochastic Bandits with Adversarial Corruptions

Saeed Masoudian, Yevgeny Seldin

Abstract
The multi-armed bandit (MAB) problem is widely studied in the stochastic setting and in the adversarial one. In recent years, however, there has been increasing interest in algorithms that perform well in both regimes, and in intermediate regimes, with no prior knowledge of the regime. The quest for best-of-both-worlds algorithms culminated in the work of Zimmert and Seldin (2021), who proposed the Tsallis-INF algorithm. We derive improved regret bounds for the Tsallis-INF algorithm: in the adversarial regime with a $(\Delta, C, T)$ self-bounding constraint, the algorithm achieves $\mathcal{O}\left(\left(\sum_{i\neq i^*} \frac{1}{\Delta_i}\right)\log_+\left(\frac{(K-1)T}{\left(\sum_{i\neq i^*} \frac{1}{\Delta_i}\right)^2}\right)+\sqrt{C\left(\sum_{i\neq i^*}\frac{1}{\Delta_i}\right)\log_+\left(\frac{(K-1)T}{C\sum_{i\neq i^*}\frac{1}{\Delta_i}}\right)}\right)$, where $T$ is the time horizon, $K$ is the number of arms, the $\Delta_i$ are the suboptimality gaps, $i^*$ is the best arm, $C$ is the corruption magnitude, and $\log_+(x) = \max\left(1,\log x\right)$. In particular, for the special cases of this regime, our result yields a $\sqrt{\log T / \log(T/C)}$ improvement in stochastic bandits with adversarial corruptions, and an improvement of the time horizon $T$ to a gap-dependent time horizon $T(K-1)/(\sum_{i \neq i^*} 1/\Delta_i)^2$ in stochastically constrained adversarial bandits. Additionally, we provide a general analysis, which allows us to achieve the same kind of improvement for generalizations of Tsallis-INF to other settings beyond multi-armed bandits.
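For readers who want to see what the Tsallis-INF algorithm looks like operationally, below is a compact NumPy sketch of the 1/2-Tsallis version: the sampling distribution is obtained by bisection on the normalising constant, and cumulative losses are updated with importance weighting. The learning-rate schedule is illustrative, and none of the refinements analysed in this poster are included.

```python
import numpy as np

def tsallis_inf_probs(cum_losses, eta):
    """Sampling distribution p_i = 4 / (eta * (Lhat_i - x))^2, with the normaliser x
    found by bisection so that the probabilities sum to one (1/2-Tsallis regulariser)."""
    K = len(cum_losses)
    lo = cum_losses.min() - 2.0 * np.sqrt(K) / eta   # here the probabilities sum to <= 1
    hi = cum_losses.min() - 2.0 / eta                # here the probabilities sum to >= 1
    for _ in range(60):                              # bisection on x
        x = 0.5 * (lo + hi)
        p = 4.0 / (eta * (cum_losses - x)) ** 2
        lo, hi = (x, hi) if p.sum() < 1.0 else (lo, x)
    return p / p.sum()

def tsallis_inf(loss_fn, K, T, seed=0):
    """Play T rounds against loss_fn(t, arm) in [0, 1]; loss_fn is a placeholder."""
    rng = np.random.default_rng(seed)
    cum_losses = np.zeros(K)                         # importance-weighted cumulative losses
    for t in range(1, T + 1):
        p = tsallis_inf_probs(cum_losses, eta=2.0 / np.sqrt(t))  # illustrative schedule
        arm = rng.choice(K, p=p)
        cum_losses[arm] += loss_fn(t, arm) / p[arm]  # importance-weighted loss update
    return cum_losses
```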
Presented by
Saeed Masoudian
Institution
University of Copenhagen, Department of Computer Science
Keywords
Learning Theory, Multiarmed Bandits

Predicting Grades of Brain Tumor from Histology Images Using Convolutional Neural Networks

Saruar Alam, Alexander S. Lundervold, and Arvid Lundervold

Abstract
Introduction

Gliomas are the most common type of primary brain tumor and are characterized by large morphological and genetic heterogeneity [HeteroGen], varying degrees of tumor growth, speed of spread and recurrence, poor prognosis, and high lethality. Gliomas are classified into low-grade glioma (LGG; grades I and II) and high-grade glioma (HGG; grades III and IV), and treatment for a glioma depends on its grade. Grading is performed by histological examination of (representative) glioma tissue according to several phenotypic characteristics that describe cell activity and tumor aggressiveness, and usually requires an experienced neuropathologist. Knowledge of the tumor grade, together with other factors such as age, clinical condition and tumor location, assists in treatment planning and helps estimate prognosis and expected survival. For instance, patients with a grade IV glioblastoma have an average survival time of 12-18 months from the date of diagnosis. Automatic glioma grading, referred to as tumor grading, is desirable as it reduces inter- and intra-operator variability in determining the tumor grade in histological images from a patient. Deep learning models used for image classification, including tumor grading, are often based on convolutional neural networks (CNNs). This study employs two different pre-trained CNN models, EfficientNet and ResNet, to classify tumor grades.

Material and methods

We collected histology images of grades II, III and IV from the TCGA-GBMLGG project, as used in [PathomicFus] and [HistoGeno] (https://hub.docker.com/r/cancerdatascience/scnn). In this setting, a histological image comprises a single region of interest (ROI) from whole slide imaging (WSI) scans of paraffin-embedded sections from a glioma patient. Multiple ROIs can be extracted from a WSI slide, so several histology images may be linked to a given patient. Our dataset contains 1458 histology images (resolution: 1024x1024) from altogether 736 patients, divided into Grade II: 181, Grade III: 205, and Grade IV: 350 subjects. We used an EfficientNet-B0 model for training, selected from the EfficientNet family (B0-B7), as it has been shown to reduce training and inference time, with fewer learnable parameters, without compromising performance. EfficientNet contains several memory-efficient inverted residual blocks (MBConv). We compared our trained model with Pathomic Fusion [PathomicFus] (https://github.com/mahmoodlab/PathomicFusion) and two ResNet variants, ResNet-18 and ResNet-34, using the area under the receiver operating characteristic curve (ROC-AUC) and the F1-score, with fifteen cross-validations.
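A minimal transfer-learning sketch of the kind of setup described above is shown below, using TensorFlow/Keras, which provides a pretrained EfficientNet-B0. The input size, head, optimiser and hyperparameters are placeholders rather than the authors' exact configuration.

```python
import tensorflow as tf

# ImageNet-pretrained EfficientNet-B0 backbone with a new 3-way head for
# glioma grades II, III and IV. The input size is a placeholder; the study's
# 1024x1024 histology ROIs may be resized or tiled in practice.
base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(512, 512, 3), pooling="avg")

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),   # grade II / III / IV
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=...)
```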

Results

Figure 3 (in the poster) shows tumor grading results from six subjects. The upper row shows histology images with ground-truth labels. The color-coded images in the lower row depict gradient-weighted class activation maps (Grad-CAM), i.e. the spatial locations of the image information most important for the tumor grade prediction.

Discussion

We adopted pre-trained EfficientNet-B0 for tumor grading classification and compared the performance with Pathomic-fusion and two ResNet variants. The proposed model achieves higher accuracy in its tumor grade predictions (cf. Fig. 2 in the poster). In further work we will expand and evaluate our approach for obtaining visual explanations for the model's predictions and also incorporate estimates of model uncertainty. Incorporation of multiparametric brain MRI from the same subjects is also a challenge to address.

References

[HeteroGen] R. Chow et al. Am J Roentgenology 2018(doi:10.2214/AJR.17.18754) [PathomicFus] R.J. Chen et al. IEEE TMI 2020(doi:10.1109/TMI.2020.3021387) [HistoGeno] M.Pooya. et al. PNAS 2018;115(13):E2970-E2979(doi:10.1073/pnas.1717139115)
Presented by
Saruar Alam
Institution
University of Bergen, Department of Biomedicine
Keywords
Histology, Glioblastoma Grading, Convolutional neural networks, Grad-CAM
Chat with Presenter
Available November 3, 9am-10.30am

Evaluating Artificial Intelligence Explanations on Domain Experts

Steven Hicks, Cise Midoglu, Inga Strumke, Steffen Mæland, Andrea Storås, Malek Hammou, Pål Halvorsen, Vajira Thambawita, and Michael Riegler

Abstract
Artificial intelligence (AI) has become one of the most promising methods to analyze data in many domains, such as medicine, sports, and autonomous driving. As data availability grows exponentially, data-based AI models are becoming increasingly complex. Only a few years ago, machine learning methods such as support vector machines were considered state of the art, but deep neural networks have since produced more promising results and received much attention. However, despite their excellent performance, these methods also have significant drawbacks. Among the most significant issues is that the more complex the models get, the harder it is to understand their inner workings, and thus their results. This may be acceptable for non-critical problems like entertainment recommendations. For high-risk fields such as medicine, on the other hand, decisions can have an impact on people's lives, and the new EU proposal for AI regulation characterizes medical uses of AI as "high risk". In these fields, it is therefore important to provide explanations of AI models, regardless of whether they succeed or fail. Methods from the field of explainable AI (XAI) address this challenge; for image explanation, widely used methods are Grad-CAM and SHapley Additive exPlanations (SHAP). However, a central and unanswered question is what constitutes a good explanation. The usefulness of an explanation depends on several factors, including how and in which context the explanation is given. Studies exploring how explanations are perceived in non-critical applications exist, but to the best of our knowledge, no study has investigated how domain experts, such as medical doctors, perceive different XAI explanations for critical tasks. To address this challenge, we propose an open-source survey framework for performing studies targeted at domain experts, to better understand the usefulness of different explanations. The framework is easy to understand and follow, and consequently requires little time from the experts, which can often be a bottleneck. Furthermore, it supports different types of survey questions, such as free text, multiple choice, and rankings. As an initial application, we use this framework to compare different explanations of the predictions from an AI model used for automatic polyp detection in colonoscopy. The goal is to get feedback from endoscopists about the different XAI explanations provided for a given prediction, but also to measure the usability of the survey framework itself. This will improve the accessibility and deployment of similar studies for future use cases.
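As context for the image-explanation methods mentioned above, here is a compact Grad-CAM sketch in TensorFlow/Keras. The model, convolutional layer name and class index are placeholders; the survey framework itself is tooling around such explanations and is not shown here.

```python
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index):
    """Return a normalised Grad-CAM heatmap for one image (H, W, C), values in [0, 1]."""
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[tf.newaxis, ...])
        score = preds[:, class_index]                  # score of the class to explain
    grads = tape.gradient(score, conv_out)             # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))       # global-average-pool the gradients
    cam = tf.reduce_sum(weights[:, tf.newaxis, tf.newaxis, :] * conv_out, axis=-1)
    cam = tf.nn.relu(cam)[0]                           # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```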
Presented by
Steven Hicks
Institution
Simula Metropolitan Center for Digital Engineering, Norwegian University of Science and Technology
Keywords
Explainable AI, Deep Learning, Surveys, Open-Source

Classification of deforestation alerts

Tord Kriznik Sørensen

Vote for this poster
Abstract
Rainforest deforestation is a major environmental challenge, causing loss of biodiversity, erosion, and the release of stored carbon into the atmosphere, contributing to climate change. Counteracting rainforest deforestation begins with monitoring individual events and finding the underlying primary drivers. The University of Maryland devised the Global Land Analysis and Discovery (GLAD) alerts for detecting changes in rainforests, providing both time and location information. These alerts are openly available at Global Forest Watch, but lack a classification of what initiated the change. The goal of this project is to classify the GLAD alerts into primary drivers using satellite imagery. Examples of drivers include road construction, mining and agricultural activities.

The suggested methodology is semantic segmentation, starting with a U-Net-like architecture. Multispectral images from the freely accessible Sentinel-2 satellites (S2) will be used. S2 provides optical images at 10 m, 20 m, and 60 m resolution with an update frequency of around 7 days. The GLAD alerts, and their immediate surrounding area, will be manually labelled, building up an S2 – GLAD alert – primary driver dataset. This dataset will be created in an iterative fashion in collaboration with domain experts.

In addition to the imagery and the labels, the input data is enriched with elevation models, which help to differentiate classes: for example, rivers always flow along the gradient direction, while roads do not.
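A minimal sketch of this enrichment, under stated assumptions (band count, class count, and the segmentation_models_pytorch library are illustrative choices, not the project's pipeline): the Sentinel-2 bands and an elevation layer are simply stacked as input channels of a U-Net-style model.

```python
# Illustrative sketch: Sentinel-2 bands plus an elevation channel as input
# to a U-Net-style segmentation model (not the project's actual pipeline).
import numpy as np
import torch
import segmentation_models_pytorch as smp

N_S2_BANDS = 10   # assumption: the 10 m / 20 m optical bands
N_CLASSES = 5     # assumption: e.g. road, mining, agriculture, river, other

# Fake patch: (bands, height, width) Sentinel-2 reflectances and a DEM tile.
s2_patch = np.random.rand(N_S2_BANDS, 256, 256).astype("float32")
elevation = np.random.rand(1, 256, 256).astype("float32")

# Enrich the optical input with elevation as an extra channel.
x = torch.from_numpy(np.concatenate([s2_patch, elevation], axis=0)).unsqueeze(0)

model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights=None,          # multispectral input, so no ImageNet weights
    in_channels=N_S2_BANDS + 1,    # optical bands + elevation
    classes=N_CLASSES,
)
logits = model(x)                  # (1, N_CLASSES, 256, 256) per-pixel scores
```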

Finally, only a tiny fraction of the GLAD alerts is labeled, but a huge number of them is available from previous years. Building on this unlabeled data, a semi-supervised training scheme can be implemented.

Many deforestation activities are illegal, and local governments often lack the resources for real-time monitoring. International NGOs and environmental monitoring programs can provide alerts to local authorities in a timely manner. Identifying the primary drivers is an important step in determining the seriousness of the alerts and prioritizing which to act upon.

The project lasts from October 2021 to June 2022.
Presented by
Tord Kriznik Sørensen, David Völgyes
Institution
Science and Technology AS
Keywords
semantic segmentation, satellite imagery, sentinel-2, remote sensing, earth observation, GLAD, RADD, deforestation

Modeling Risky Choices in Unknown Environments

Ville Tanskanen, Chang Rajani, Homayun Afrabandpey, Aini Putkonen, Aurélien Nioche, Arto Klami

Vote for this poster
Abstract
Decision-theoretic models explain human behavior in choice problems involving uncertainty, in terms of individual tendencies such as risk aversion. However, many classical models of risk require knowing the distribution of possible outcomes (rewards) for all options, limiting their applicability outside of controlled experiments. We study the task of learning such models in contexts where the modeler does not know the distributions but instead can only observe the choices and their outcomes for a user familiar with the decision problems, for example a skilled player playing a digital game. We propose a framework combining two separate components, one for modeling the unknown decision-making environment and another for the risk behavior. By using environment models capable of learning distributions we are able to infer classical models of decision-making under risk from observations of the user's choices and outcomes alone, and we also demonstrate alternative models for predictive purposes. We validate the approach on artificial data and demonstrate a practical use case in modeling risk attitudes of professional esports teams.
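The following sketch illustrates the general idea in the simplest possible terms; it is an assumption-laden toy, not the authors' framework. Observed outcomes stand in for a learned environment model of the reward distributions, and a single CRRA risk-aversion parameter is fit by maximum likelihood over the observed choices via a softmax choice rule.

```python
# Hedged toy sketch of inferring risk attitude from choices and outcomes.
import numpy as np
from scipy.optimize import minimize_scalar

def crra_utility(x, rho):
    """Constant relative risk aversion utility; larger rho = more risk averse."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if np.isclose(rho, 1.0) else (x ** (1 - rho) - 1) / (1 - rho)

def neg_log_likelihood(rho, choices, outcome_samples, temperature=1.0):
    """Softmax choice rule over the expected utility of each option.

    outcome_samples[t][k]: outcomes observed for option k at trial t, standing
    in for a learned environment model of that option's reward distribution.
    """
    nll = 0.0
    for chosen, options in zip(choices, outcome_samples):
        eu = np.array([crra_utility(o, rho).mean() for o in options])
        logp = temperature * eu - np.log(np.sum(np.exp(temperature * eu)))
        nll -= logp[chosen]
    return nll

# Toy data: option 1 is riskier (wider outcome spread) than option 0.
rng = np.random.default_rng(0)
outcome_samples = [
    [rng.normal(10, 1, 50).clip(0.1), rng.normal(11, 6, 50).clip(0.1)]
    for _ in range(200)
]
choices = [0] * 150 + [1] * 50  # mostly the safe option -> risk-averse user

fit = minimize_scalar(neg_log_likelihood, bounds=(0.01, 3.0), method="bounded",
                      args=(choices, outcome_samples))
print("estimated risk aversion:", fit.x)
```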
Presented by
Ville Tanskanen
Institution
University of Helsinki, Nokia, Aalto University
Keywords

Chebyshev-Cantelli PAC-Bayes-Bennett Inequality for the Weighted Majority Vote

Yi-Shan Wu, Andrés R. Masegosa, Stephan S. Lorenzen, Christian Igel, Yevgeny Seldin

Vote for this poster
Abstract
We present a new second-order oracle bound for the expected risk of a weighted majority vote. The bound is based on a novel parametric form of the Chebyshev-Cantelli inequality (a.k.a. one-sided Chebyshev’s), which is amenable to efficient minimization. The new form resolves the optimization challenge faced by prior oracle bounds based on the Chebyshev-Cantelli inequality, the C-bounds [Germain et al., 2015], and, at the same time, it improves on the oracle bound based on second order Markov’s inequality introduced by Masegosa et al. [2020]. We also derive a new concentration of measure inequality, which we name PAC-Bayes-Bennett, since it combines PAC-Bayesian bounding with Bennett’s inequality. We use it for empirical estimation of the oracle bound. The PAC-Bayes-Bennett inequality improves on the PAC-Bayes-Bernstein inequality of Seldin et al. [2012]. We provide an empirical evaluation demonstrating that the new bounds can improve on the work of Masegosa et al. [2020]. Both the parametric form of the Chebyshev-Cantelli inequality and the PAC-Bayes-Bennett inequality may be of independent interest for the study of concentration of measure in other domains.
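For reference, the classical Chebyshev-Cantelli (one-sided Chebyshev) inequality on which the new parametric form builds can be stated as follows; this is the textbook inequality, not the paper's parametric variant or the PAC-Bayes-Bennett bound itself.

```latex
% Classical Chebyshev-Cantelli inequality: for a random variable Z with
% finite variance and any t > 0,
\[
  \Pr\bigl(Z \ge \mathbb{E}[Z] + t\bigr)
  \;\le\;
  \frac{\operatorname{Var}[Z]}{\operatorname{Var}[Z] + t^{2}} .
\]
```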
Presented by
Yi-Shan Wu
Institution
University of Copenhagen, Department of Computer Science
Keywords
Learning Theory, Machine Learning, PAC-Bayes, Weighted Majority Vote

A Dynamic and Self-Optimizing Approach for Job-Shop Floor Path Finding and Collision Avoidance in Industry 4.0

Yigit Can Dundar

Vote for this poster
Abstract
Within Industry 4.0, job-shop floor logistics require efficient and flexible solutions to deal with ever-increasing demands from customers. To meet these demands, automated guided vehicles are relied upon to handle job-shop floor logistics tasks. The vehicles can deliver products or materials from one place to another, using path finding to follow optimal routes while avoiding collisions. Current path finding implementations of systems based on automated guided vehicles rely on external guidance material placed on the floor, which the vehicles need to follow in order to reach their destinations. This approach lacks flexibility, as the vehicles have little room to improvise and adapt to unexpected changes in the working environment. The reliance on guidance material can also increase scaling costs related to job-shop floor expansions. As an alternative, this research project provides a dynamic path finding method which does not rely on external guidance material and is designed to adapt to both expected and unexpected changes in the environment. The vehicles are equipped with distance-measuring sensors to detect and avoid both static and dynamic obstacles during run-time. A job-shop floor simulation has been implemented in which multiple vehicles work simultaneously to move products from one place to another while avoiding collisions with each other and with their environment, which is prone to unexpected changes during run-time. Functionality and scaling measurement tests were conducted to evaluate the approach. Current findings indicate that the dynamic path finding method allows multiple agents to autonomously find their own paths while avoiding collisions with each other and with the environment. The method also scales well with time and with the number of vehicles in the environment. As future work, the collision avoidance of the dynamic method will be coupled with reinforcement learning, allowing the agents to learn from their experiences and possibly adapt better to changes in the environment, and self-optimization will be deployed to potentially improve path finding efficiency through smarter self-routing.
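A generic illustration of the re-planning behavior described above, under simplifying assumptions (a 2D grid map, a 4-connected A* planner, and a boolean obstacle sensor); this is not the project's implementation.

```python
# Generic grid-based A* path finding with re-planning when an on-board
# sensor reports a newly blocked cell (illustration only).
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a 0/1 grid (1 = blocked), or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan
    frontier, came_from, cost = [(h(start), start)], {start: None}, {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                new_cost = cost[cur] + 1
                if new_cost < cost.get(nxt, float("inf")):
                    cost[nxt] = new_cost
                    came_from[nxt] = cur
                    heapq.heappush(frontier, (new_cost + h(nxt), nxt))
    return None

def navigate(grid, start, goal, sense_obstacle):
    """Follow the planned path, re-planning whenever the next cell is blocked."""
    pos, path = start, astar(grid, start, goal)
    while path and pos != goal:
        nxt = path[1]
        if sense_obstacle(nxt):          # distance sensor flags an obstacle
            grid[nxt[0]][nxt[1]] = 1     # update the map and re-plan
            path = astar(grid, pos, goal)
            continue
        pos, path = nxt, path[1:]
    return pos
```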
Presented by
Yigit Can Dundar
Institution
UiT - The Arctic University of Norway
Keywords
Automation, Collision avoidance, Industry 4.0, Multi-agent systems, Path finding

Modeling Piano Nonlinearities Using Deep Learning

Riccardo Simionato

Vote for this poster
Abstract
Piano modeling represents a topic of great interest in musical acoustics and digital sound synthesis. The piano has an important role in Western music due to its complexity and versatility. For these reasons, digital piano substitutes capable of sounding as close as possible to the real instrument have always been in demand. The piano, like other acoustic and analog electronic instruments, presents nonlinearities that provide unique timbral characteristics which cannot be easily reproduced by digital counterparts. The mechanical system underlying this modeling problem is a nonlinear coupled system with reciprocal interactions. Time discretization must be performed while ensuring numerical stability and computational efficiency, which is not straightforward in this context. Solving the problem requires long computation times, and not all aspects of this complex system are fully understood. As a result, current models fail to reproduce all the tonal aspects of the real instrument. Similar problems arise in the modeling of nonlinear analog audio effects, where nonlinearities also present significant challenges. Hence, the aim of this work is to advance the state of the art in piano emulation by using artificial intelligence approaches, conceiving innovative automatic tools that facilitate the modeling of the nonlinearities present in the piano. In recent years, several machine learning-based approaches have been proposed for nonlinear circuit modeling. These algorithms have been shown to capture nonlinear aspects that are not easily modeled by non-ML algorithms. Much of this effort has used Recurrent and Convolutional Neural Networks (RNNs and CNNs). CNN-like architectures have been explored to model nonlinear effects with short-term memory, such as distortion, overdrive, and amplifiers. Autoregressive waveform models have been shown to capture local dependencies in temporal sequences, but struggle with long-term ones. RNNs, on the other hand, have been shown to capture temporal dependencies over very long time spans. Furthermore, RNNs have been shown to achieve the accuracy of CNNs on distortion effects while requiring less processing power. Beyond recent results in the audio field, further insight may come from newer ML techniques such as Transformer networks. Transformers have been shown to model global dependencies between input and output; they are based on attention mechanisms without recurrence or convolutions, which can lower computational requirements. Finally, the great majority of recent works underline the suitability of these algorithms for modeling nonlinear audio. Recent results suggest that deep learning approaches can model nonlinear audio behaviors, and that solutions meeting real-time constraints can be found. This work aims to use machine learning approaches to train a physically-based model of the piano and recreate the physical instrument in the digital domain through deep learning techniques.
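As a minimal sketch of the recurrent approach mentioned above (an assumption about the general technique, not this work's model), a small LSTM can be trained to map an input waveform to a nonlinearly processed target, as in recurrent models of distortion-like audio effects; the toy target here is a static tanh nonlinearity rather than real piano recordings.

```python
# Toy LSTM for nonlinear audio modeling (illustration only).
import torch
import torch.nn as nn

class AudioRNN(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, x):            # x: (batch, samples, 1)
        h, _ = self.lstm(x)
        return self.out(h)           # predicted output waveform

model = AudioRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy data: the "instrument" is a static nonlinearity (tanh) applied to noise;
# a real dataset would pair excitation signals with recorded piano audio.
x = torch.randn(8, 2048, 1)
y = torch.tanh(3 * x)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```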
Presented by
Riccardo Simionato
Institution
University of Oslo, Dept. of Musicology
Keywords
Deep learning, Pianoforte, Music, audio modeling, audio synthesis