Affinity Posters
Tiny Papers Poster Session 1
Krystal Maughan · Thomas F Burns
Halle B
Schedule
Tue 1:45 a.m. - 3:45 a.m.
Explorations in Texture Learning (Poster #311)
Poster Location: Halle B #311
In this work, we investigate texture learning: the identification of textures learned by object classification models, and the extent to which they rely on these textures. We build texture-object associations that uncover new insights about the relationships between texture and object classes in CNNs and find three classes of results: associations that are strong and expected, strong and not expected, and expected but not present. Our analysis demonstrates that investigations in texture learning enable new methods for interpretability and have the potential to uncover unexpected biases.
Blaine Hoak · Patrick McDaniel
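As an illustration of the kind of analysis described above (our sketch, not the authors' code), one way to build texture-object associations is to classify images from a texture dataset with a pretrained object classifier and record which object classes each texture is mapped to. The dataset path is a placeholder; a describable-textures-style dataset (e.g., DTD) laid out one folder per texture is assumed.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

# Hypothetical texture dataset laid out as <texture_name>/<image>.jpg.
TEXTURE_DIR = "path/to/texture_dataset"

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
dataset = ImageFolder(TEXTURE_DIR, transform=preprocess)
loader = DataLoader(dataset, batch_size=64)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()

# assoc[t, o] counts how often texture t is classified as ImageNet object o.
assoc = torch.zeros(len(dataset.classes), 1000)
with torch.no_grad():
    for images, texture_ids in loader:
        preds = model(images).argmax(dim=1)
        for t, o in zip(texture_ids, preds):
            assoc[t, o] += 1

# Row-normalise: assoc[t] becomes the distribution of object predictions for
# texture t; concentrated mass suggests a strong texture-object association.
assoc = assoc / assoc.sum(dim=1, keepdim=True).clamp(min=1)
```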
Tue 1:45 a.m. - 3:45 a.m.
Design of a molecular exchange-based robust perceptron for biomolecular neural network (Poster #310)
Poster Location: Halle B #310
A molecular perceptron is of immense interest due to its computing and classification ability in biophysical and aqueous environments. Because such a perceptron relies on biochemical interactions, it must adapt to perturbations and be resilient against stochastic fluctuations to maintain faithful in vivo classification. In this paper, we design a molecular exchange mechanism (MEM)-based perceptron following a set of evolutionarily preserved in vivo signaling steps, including negative feedback, which is known for noise regulation. The efficacy study of the MEM-perceptron demonstrates improved adaptation against perturbations and noise.
Moshiur Rahman · Muhtasim Ishmum Khan · Md. Shahriar Karim
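The noise-regulation role of negative feedback can be illustrated with a heavily abstracted simulation (our toy model, not the MEM chemistry): a perceptron-like output that relaxes toward a weighted sum of fluctuating inputs, damped by a feedback term.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.0, -0.5, 0.8])        # coupling weights (hypothetical)
x_true = np.array([0.6, 0.4, 0.7])    # nominal input concentrations

def simulate(feedback_gain, steps=5000, dt=0.01, noise=0.2):
    """Euler simulation of dy/dt = w.x(t) - feedback_gain * y with noisy x."""
    y, trace = 0.0, []
    for _ in range(steps):
        x = x_true + noise * rng.normal(size=3)   # fluctuating inputs
        y += dt * (w @ x - feedback_gain * y)
        trace.append(y)
    return np.array(trace[steps // 2:])           # discard the transient

for gain in (0.5, 2.0, 8.0):
    tr = simulate(gain)
    print(f"feedback gain {gain}: mean {tr.mean():.3f}, std {tr.std():.3f}")
# Stronger negative feedback shrinks the output fluctuations (the set-point
# shifts too; a real design would rescale the weights to compensate).
```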
Tue 1:45 a.m. - 3:45 a.m.
Tracing Footprints: Neural Networks Meet Non-integer Order Differential Equations For Modelling Systems with Memory (Poster #309)
Poster Location: Halle B #309
Neural Ordinary Differential Equations (Neural ODEs) have gained popularity for modelling real-world systems, thanks to their ability to fit ODEs to data. However, numerous systems in science and engineering exhibit intricate memory behaviours, and classical ODEs are inadequate for such tasks because they cannot capture strong and complex memory effects. In this work, we introduce the Neural Fractional Differential Equation (Neural FDE), a Neural Network (NN) architecture that fits an FDE to data. This leverages the capabilities of FDEs, allowing the architecture to take all past states into account, along with their influence on the system's current and future behaviour. The Neural FDE inherently exhibits memory, providing a more accurate representation of complex phenomena in systems with long-term dependencies. Numerical experiments show that the Neural FDE generalises better and converges faster than Neural ODEs.
C. Coelho · M. Fernanda Costa · Luís Ferrás
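To make the memory effect concrete, here is a minimal sketch of the idea (our illustration; the paper's solver and architecture details may differ): a fractional forward-Euler scheme for the Caputo derivative, where every update re-weights the entire history of the learned right-hand side f_theta.

```python
import math
import torch
import torch.nn as nn

class FNet(nn.Module):
    """Hypothetical learned right-hand side f_theta(t, y)."""
    def __init__(self, dim=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(),
                                 nn.Linear(hidden, dim))
    def forward(self, t, y):
        return self.net(torch.cat([t.expand(y.shape[:-1] + (1,)), y], dim=-1))

def fde_solve(f, y0, alpha, t_grid):
    """Fractional rectangle rule for D^alpha y = f(t, y), 0 < alpha < 1:
    y_{n+1} = y0 + h^a / Gamma(a+1) * sum_j [(n+1-j)^a - (n-j)^a] f(t_j, y_j)."""
    h = (t_grid[1] - t_grid[0]).item()
    scale = h ** alpha / math.gamma(alpha + 1)
    ys, fs = [y0], []
    for n in range(len(t_grid) - 1):
        fs.append(f(t_grid[n], ys[n]))
        j = torch.arange(n + 1, dtype=y0.dtype)
        b = (n + 1 - j) ** alpha - (n - j) ** alpha   # weights over ALL past f's
        ys.append(y0 + scale * sum(bj * fj for bj, fj in zip(b, fs)))
    return torch.stack(ys)

f = FNet()
t = torch.linspace(0.0, 1.0, 50)
traj = fde_solve(f, torch.zeros(2), 0.7, t)   # differentiable w.r.t. f's weights
```

Unlike a classical ODE step, which depends only on the current state, each step above touches every stored f value, which is exactly the long-term dependence the abstract describes (at a quadratic cost in the number of steps).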
Tue 1:45 a.m. - 3:45 a.m.
|
Bad Minima of Predictive Coding Energy Functions
(
Poster
#308
)
Poster Location: Halle B #308
We investigate Predictive Coding Networks (PCNs) by analyzing their performance under different choices of activation functions. We link existing theoretical work on the convergence of simple PCNs to a concrete toy example of a network that is simple enough for the fixed points in its training stage to be discussed explicitly. We show that using activation functions that are popular in mainstream machine learning, such as the ReLU, does not guarantee the minimization of the empirical risk during training. We show non-convergence on an illustrative toy example and significant accuracy loss in classification tasks on common datasets when using ReLU compared to other activation functions.
Simon Frieder · Luca Pinchetti · Thomas Lukasiewicz
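A self-contained toy example in the spirit of the abstract (our construction, not necessarily the paper's): with a ReLU activation, inference by gradient descent on a predictive coding energy can halt at a stationary point whose energy is well above the minimum, because the ReLU gradient dies at zero.

```python
import torch

# Energy of a tiny PCN with one latent activity x, input u, and target y:
#   E(x) = (x - relu(w1 * u))^2 + (y - relu(w2 * x))^2
u, y = torch.tensor(1.0), torch.tensor(1.0)
w1, w2 = torch.tensor(-1.0), torch.tensor(1.0)   # w1 < 0 clamps the prediction

def energy(x):
    return (x - torch.relu(w1 * u)) ** 2 + (y - torch.relu(w2 * x)) ** 2

def run_inference(x_init, lr=0.05, steps=1000):
    x = torch.tensor(x_init, requires_grad=True)
    for _ in range(steps):
        (g,) = torch.autograd.grad(energy(x), x)
        with torch.no_grad():
            x -= lr * g
    return x.item(), energy(x).item()

print(run_inference(0.0))   # stuck: x = 0.0, E = 1.0 (ReLU gradient is zero)
print(run_inference(0.1))   # escapes: x -> 0.5, E -> 0.5, the true minimum
```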
Tue 1:45 a.m. - 3:45 a.m.
Generating Counterfactual Explanations Using Cardinality Constraints (Poster #307)
Poster Location: Halle B #307
Providing explanations of how machine learning algorithms work and/or make particular predictions is one of the main tools for improving their trustworthiness, fairness, and robustness. Among the most intuitive types of explanations are counterfactuals: examples that differ from a given point only in the prediction target and some set of features, showing which features of the original example need to change to flip its prediction. However, such counterfactuals can differ from the original example in many features, making them difficult to interpret. In this paper, we propose to explicitly add a cardinality constraint to counterfactual generation, limiting how many features may differ from the original example and thus producing more interpretable and easily understandable counterfactuals.
Ruben Ruiz-Torrubiano
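A minimal sketch of the idea (the paper casts the cardinality constraint inside an optimisation formulation; the exhaustive search below is only for illustration): flip a classifier's prediction while changing at most k features, with candidate values taken from training-data quantiles.

```python
from itertools import combinations
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
clf = LogisticRegression(max_iter=5000).fit(X, y)

def counterfactual(x, k=2, n_candidates=5):
    """Closest point flipping clf's prediction while changing <= k features."""
    target = 1 - clf.predict(x.reshape(1, -1))[0]
    best, best_dist = None, np.inf
    for feats in combinations(range(X.shape[1]), k):
        # candidate values per chosen feature: quantiles of the training data
        grids = np.meshgrid(*[np.quantile(X[:, f], np.linspace(0.05, 0.95, n_candidates))
                              for f in feats])
        cand = np.tile(x, (grids[0].size, 1))
        for i, f in enumerate(feats):
            cand[:, f] = grids[i].ravel()
        hits = cand[clf.predict(cand) == target]   # candidates with flipped label
        if len(hits):
            d = np.linalg.norm(hits - x, axis=1)
            if d.min() < best_dist:
                best, best_dist = hits[d.argmin()], d.min()
    return best

x_cf = counterfactual(X[0], k=2)
if x_cf is not None:
    print("changed features:", np.flatnonzero(x_cf != X[0]))  # at most k indices
```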
Tue 1:45 a.m. - 3:45 a.m.
|
REGION MIXUP
(
Poster
#306
)
Poster Location: Halle B #306
This paper introduces a simple extension of mixup data augmentation to enhance generalization in visual recognition tasks. Unlike the vanilla mixup method, which blends entire images, our approach focuses on combining regions from multiple images.
Saptarshi Saha · Utpal Garain
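One plausible reading of the abstract in code (our sketch; the authors' region scheme may differ): blend only a random rectangular region of two images, and weight the labels by how much of each image ends up in the result.

```python
import numpy as np

def region_mixup(x_a, x_b, y_a, y_b, alpha=1.0, rng=None):
    """x_*: (H, W, C) images in [0, 1]; y_*: one-hot label vectors."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # blend strength inside the region
    H, W = x_a.shape[:2]
    rh, rw = rng.integers(1, H + 1), rng.integers(1, W + 1)       # region size
    r0, c0 = rng.integers(0, H - rh + 1), rng.integers(0, W - rw + 1)
    rows, cols = slice(r0, r0 + rh), slice(c0, c0 + rw)

    out = x_a.astype(float).copy()
    out[rows, cols] = lam * x_a[rows, cols] + (1 - lam) * x_b[rows, cols]

    w_b = (1 - lam) * (rh * rw) / (H * W)   # fraction of the output owed to x_b
    return out, (1 - w_b) * y_a + w_b * y_b

rng = np.random.default_rng(0)
x_a, x_b = rng.random((32, 32, 3)), rng.random((32, 32, 3))
x, y = region_mixup(x_a, x_b, np.eye(10)[3], np.eye(10)[7], rng=rng)
```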
Tue 1:45 a.m. - 3:45 a.m.
Adaptive Brain Network Augmentation based on Group-aware Graph Learning (Poster #305)
Poster Location: Halle B #305
Brain network analysis significantly improves artificial intelligence techniques in the realm of digital health. Most existing methods construct brain networks uniformly across different groups (e.g., male and female groups, healthy and sick groups), suffering interference from group-irrelevant noise and failing to capture the group-specific features that could enhance brain networks. To address this issue, this paper proposes an adaptive brain network augmentation method based on group-aware graph learning. We construct group-aware brain networks that adapt to distinct groups, reducing the interference of noise and improving model robustness across various tasks and subject groups.
Ciyuan Peng · Mujie Liu · Chenxuan Meng · Shuo Yu · Feng Xia
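A highly abstracted sketch of the general shape (our guess at the mechanism; the paper's model is richer): subject-level correlation networks modulated by a learnable, group-specific edge mask, so each group gets its own augmented adjacency.

```python
import torch
import torch.nn as nn

class GroupAwareAugment(nn.Module):
    """Hypothetical design: one learnable edge-mask logit matrix per group."""
    def __init__(self, n_rois, n_groups):
        super().__init__()
        self.mask_logits = nn.Parameter(torch.zeros(n_groups, n_rois, n_rois))

    def forward(self, adj, group_id):
        """adj: (batch, n_rois, n_rois) correlation matrices."""
        mask = torch.sigmoid(self.mask_logits[group_id])  # (batch, R, R)
        return adj * mask          # group-aware network: noisy edges suppressed

n_rois, n_groups = 90, 2
aug = GroupAwareAugment(n_rois, n_groups)
ts = torch.randn(4, n_rois, 200)                    # fake ROI time series
adj = torch.stack([torch.corrcoef(s) for s in ts])  # subject-level networks
out = aug(adj, torch.tensor([0, 0, 1, 1]))          # per-subject group labels
```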
Tue 1:45 a.m. - 3:45 a.m.
Using spiking neural networks to assist fine art and philology study: to classify styles of Chinese calligraphy with minimal computing power (Poster #304)
Poster Location: Halle B #304
Spiking Neural Networks have drawn much attention for their potential deployment in low-computing-power scenarios and in interdisciplinary research. This paper focuses on the novel task of classifying Chinese calligraphy styles and introduces a network called CaStySNN. Compared to traditional artificial neural networks with the same structure, CaStySNN requires significantly less computing power while demonstrating superior performance across different datasets. In the future, this approach can be applied to neuromorphic devices, offering solutions to a wide range of challenges in fine arts and philology.
Zheng Luan · Xiangqi Kong · Shuimu Zeng · 姚羽珂 · Yaxuan Zhang · Xuerui Qiu
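For readers unfamiliar with the substrate, a generic spiking-classifier sketch (not the CaStySNN architecture): leaky integrate-and-fire neurons driven by rate-coded pixels, with the predicted style taken as the most active output neuron. Because activity is binary spikes, most accumulations become sparse additions, which is where the low-computing-power claim comes from.

```python
import torch
import torch.nn as nn

class LIFLayer(nn.Module):
    """One layer of leaky integrate-and-fire neurons with soft reset."""
    def __init__(self, n_in, n_out, beta=0.9, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(n_in, n_out)
        self.beta, self.threshold = beta, threshold

    def forward(self, x_seq):
        """x_seq: (T, batch, n_in) input spikes -> (T, batch, n_out) spikes."""
        mem = torch.zeros(x_seq.shape[1], self.fc.out_features)
        spikes = []
        for x in x_seq:
            mem = self.beta * mem + self.fc(x)     # leaky integration
            spk = (mem >= self.threshold).float()  # fire
            mem = mem - spk * self.threshold       # soft reset
            spikes.append(spk)
        return torch.stack(spikes)

T, batch, n_pixels, n_styles = 20, 8, 28 * 28, 5
images = torch.rand(batch, n_pixels)                       # stand-in calligraphy
x_seq = (torch.rand(T, batch, n_pixels) < images).float()  # rate coding
out_spikes = LIFLayer(n_pixels, n_styles)(x_seq)
pred_style = out_spikes.sum(dim=0).argmax(dim=1)  # most active output neuron
# Training would need surrogate gradients for the spike threshold; this sketch
# covers the forward (inference) path only.
```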
Tue 1:45 a.m. - 3:45 a.m.
An Evaluation Benchmark for Autoformalization in Lean4 (Poster #303)
Poster Location: Halle B #303
In the advancing field of computational mathematics, Large Language Models (LLMs) hold the potential to revolutionize autoformalization, a process crucial across various disciplines. The introduction of Lean4, a mathematical programming language, presents an unprecedented opportunity to rigorously assess the autoformalization capabilities of LLMs. This paper introduces a novel evaluation benchmark designed for Lean4, applying it to test the abilities of state-of-the-art LLMs, including GPT-3.5, GPT-4, and Gemini Pro. Our comprehensive analysis reveals that, despite recent advancements, these LLMs still exhibit limitations in autoformalization, particularly in more complex areas of mathematics. These findings underscore the need for further development in LLMs to fully harness their potential in scientific research and development. This study not only benchmarks current LLM capabilities but also sets the stage for future enhancements in the field of autoformalization.
Jasdeep Sidhu · Shubhra Mishra
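A minimal harness of the kind such a benchmark needs (our sketch, not the paper's code): ask the model under evaluation for a Lean4 formalization and record whether the output type-checks. `query_llm` is a placeholder for whichever model API is being evaluated, and statements that import Mathlib would need a lake project rather than the bare `lean` call used here.

```python
import subprocess
import tempfile

def query_llm(prompt: str) -> str:
    """Placeholder: call the model under evaluation, return Lean4 source."""
    raise NotImplementedError

def type_checks(lean_src: str) -> bool:
    """Ask the Lean4 binary whether the generated source elaborates."""
    with tempfile.NamedTemporaryFile("w", suffix=".lean", delete=False) as f:
        f.write(lean_src)
        path = f.name
    result = subprocess.run(["lean", path], capture_output=True, text=True)
    return result.returncode == 0

def evaluate(benchmark):
    """benchmark: list of informal statements; returns the pass rate."""
    passed = 0
    for informal in benchmark:
        src = query_llm(f"Formalise the following statement in Lean 4:\n{informal}")
        passed += type_checks(src)
    return passed / len(benchmark)
```

Type-checking alone verifies well-formedness, not faithfulness to the informal statement; a full benchmark also needs a semantic comparison against reference formalizations.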
Tue 1:45 a.m. - 3:45 a.m.
GNN-VPA: A Variance-Preserving Aggregation Strategy for Graph Neural Networks (Poster #302)
Poster Location: Halle B #302
The success of graph neural networks (GNNs), and of message passing neural networks in particular, critically depends on the functions employed for message aggregation and graph-level readout. Using signal propagation theory, we propose a variance-preserving aggregation function, which maintains the expressivity of GNNs while improving learning dynamics. Our results could pave the way towards normalizer-free or self-normalizing GNNs.
Lisa Schneckenreiter
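The scaling rule follows from elementary signal-propagation arithmetic: if a node receives n i.i.d. messages of variance sigma^2, their sum has variance n * sigma^2, so dividing by sqrt(n), rather than by n as mean aggregation does, preserves the variance. A minimal sketch of that aggregation step (our code; details beyond the scaling are assumptions):

```python
import torch

def vp_aggregate(messages, dst, n_nodes):
    """messages: (E, d), one per edge; dst: (E,) destination node indices."""
    agg = torch.zeros(n_nodes, messages.shape[1])
    agg.index_add_(0, dst, messages)                   # sum incoming messages
    deg = torch.zeros(n_nodes).index_add_(0, dst, torch.ones(len(dst)))
    return agg / deg.clamp(min=1).sqrt().unsqueeze(1)  # divide by sqrt(degree)

# Check: with i.i.d. unit-variance messages the output variance stays near 1,
# whereas sum aggregation gives ~degree and mean aggregation gives ~1/degree.
E, d, n = 10_000, 64, 100
out = vp_aggregate(torch.randn(E, d), torch.randint(0, n, (E,)), n)
print(out.var().item())
```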
Tue 1:45 a.m. - 3:45 a.m.
Is Watermarking LLM-Generated Code Robust? (Poster #301)
Poster Location: Halle B #301
We present the first study of the robustness of existing watermarking techniques on Python code generated by large language models. Although existing work has shown that watermarking can be robust for natural language text, we show that these watermarks are easily removed from code by simple semantic-preserving transformations.
Tarun Suresh
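One concrete example of a semantic-preserving transformation of the kind the abstract refers to (our sketch, not the paper's attack suite): consistently renaming variables with Python's ast module, which rewrites the token sequence the watermark was embedded in while leaving behavior unchanged. A real attack must also handle scopes, globals, and attributes; this covers the simple case.

```python
import ast
import builtins

class RenameVariables(ast.NodeTransformer):
    """Consistently rename variables and parameters, skipping builtins."""
    def __init__(self):
        self.mapping = {}

    def _new_name(self, old):
        if old not in self.mapping:
            self.mapping[old] = f"v{len(self.mapping)}"
        return self.mapping[old]

    def visit_Name(self, node):
        if node.id not in dir(builtins):   # keep len, print, ...
            node.id = self._new_name(node.id)
        return node

    def visit_arg(self, node):
        node.arg = self._new_name(node.arg)
        return node

code = """
def mean(values):
    total = 0
    for value in values:
        total += value
    return total / len(values)
"""
# Same behaviour, different tokens: values/total/value become v0/v1/v2.
print(ast.unparse(RenameVariables().visit(ast.parse(code))))
```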