Portrait of Max Ploner

Max Ploner

PhD student at Humboldt Universität zu Berlin (machine learning group) and member of Science of Intelligence (SCIoI).

I'm interested in continual & transfer learning, knowledge probing, and growing neural networks. If you have questions, interesting papers to share, or anything else on your mind, don't hesitate to reach out!

Projects

Accepted at NAACL 2024

BEAR & LM Pub Quiz

Dataset & Evaluation library

Illustration of how LM Pub Quiz evaluates LMs.

Knowledge probing assesses the degree to which a language model (LM) has successfully learned relational knowledge during pre-training. Probing is an inexpensive way to compare LMs of different sizes and training configurations. However, previous approaches rely on the objective function used in pre-training LMs and are thus applicable only to masked or causal LMs. As a result, comparing different types of LMs becomes impossible. To address this, we propose an approach that uses an LM's inherent ability to estimate the log-likelihood of any given textual statement. We carefully design an evaluation dataset of 7,731 instances (40,916 in a larger variant) from which we produce alternative statements for each relational fact, one of which is correct. We then evaluate whether an LM correctly assigns the highest log-likelihood to the correct statement. Our experimental evaluation of 22 common LMs shows that our proposed framework, BEAR, can effectively probe for knowledge across different LM types. We release the BEAR datasets and an open-source framework that implements the probing approach to the research community to facilitate the evaluation and development of LMs.
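To give a rough sense of the ranking idea, here is a minimal sketch (not the LM Pub Quiz API itself): it scores a few alternative statements with a causal LM from Hugging Face transformers and checks whether the correct one receives the highest log-likelihood. The model name and example fact are placeholders chosen purely for illustration.

```python
# Illustrative sketch of log-likelihood-based knowledge probing with a causal LM.
# This is NOT the LM Pub Quiz implementation, just the underlying idea.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any causal LM; placeholder choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()


def sentence_log_likelihood(sentence: str) -> float:
    """Total log-probability the model assigns to the statement."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    # outputs.loss is the mean negative log-likelihood per predicted token;
    # multiply by the number of predicted tokens to obtain the total.
    num_tokens = inputs["input_ids"].shape[1]
    return -outputs.loss.item() * (num_tokens - 1)


# One relational fact, expressed as alternative statements (one is correct).
statements = [
    "The capital of France is Paris.",   # correct
    "The capital of France is Berlin.",  # distractor
    "The capital of France is Madrid.",  # distractor
]
scores = [sentence_log_likelihood(s) for s in statements]
prediction = statements[scores.index(max(scores))]
print(prediction)  # the instance counts as correct if this is the true statement
```

For masked LMs, the same ranking can be done with a pseudo-log-likelihood in place of the causal score, which is what makes the comparison across LM types possible.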

Accepted at EACL 2024

Parameter-Efficient Fine-Tuning: Is There An Optimal Subset of Parameters to Tune?

Figure 1 from the paper

The ever-growing size of pretrained language models (PLMs) presents a significant challenge for efficiently fine-tuning and deploying these models for diverse sets of tasks within memory-constrained environments. In light of this, recent research has illuminated the possibility of selectively updating only a small subset of a model's parameters during the fine-tuning process. Since no new parameters or modules are added, these methods retain the inference speed of the original model and come at no additional computational cost. However, an open question pertains to which subset of parameters should best be tuned to maximize task performance and generalizability. To investigate, this paper presents comprehensive experiments covering a large spectrum of subset selection strategies. We comparatively evaluate their impact on model performance as well as the resulting model's capability to generalize to different tasks. Surprisingly, we find that the gains achieved in performance by elaborate selection strategies are, at best, marginal when compared to the outcomes obtained by tuning a random selection of parameter subsets. Our experiments also indicate that selection-based tuning impairs generalizability to new tasks.
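To make the selection-based setup concrete, here is a minimal sketch in PyTorch, assuming a plain SGD optimizer: it zeroes the gradients of all parameters outside a randomly chosen subset, so only existing parameters are updated and no new modules are introduced. The subset fraction, the tiny stand-in model, and the helper names are assumptions for illustration, not the paper's configuration.

```python
# Hedged sketch: tune only a random subset of a model's existing parameters
# by masking out all other gradients before the optimizer step.
import torch


def make_random_masks(model: torch.nn.Module, fraction: float) -> dict:
    """For each weight tensor, mark a random `fraction` of entries as trainable."""
    return {
        name: (torch.rand_like(param) < fraction).float()
        for name, param in model.named_parameters()
    }


def apply_masks(model: torch.nn.Module, masks: dict) -> None:
    """Zero the gradients of all parameters outside the selected subset."""
    for name, param in model.named_parameters():
        if param.grad is not None:
            param.grad.mul_(masks[name])


# Toy usage; a real setup would use a pretrained LM instead of this stand-in.
model = torch.nn.Linear(10, 2)
masks = make_random_masks(model, fraction=0.5)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(4, 10), torch.randint(0, 2, (4,))
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
apply_masks(model, masks)   # only the selected subset keeps nonzero gradients
optimizer.step()
```

Swapping `make_random_masks` for a more elaborate selection strategy is exactly the comparison the paper investigates; the abstract's finding is that such strategies offer at best marginal gains over the random choice sketched here.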

City Places Scraper

Collect metadata on places (businesses, bars, cafés, etc.) in a specified area

The scraper collects places (businesses, bars, cafés, etc.) from OpenStreetMap (OSM) and gathers additional metadata, including a time-specific popularity index; a sketch of a typical OSM query is shown below. It was originally written at powerplace.io for internal use and later made public.
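For a sense of what the place-collection step can look like, here is a hedged sketch that queries the public Overpass API for cafés, bars, and restaurants inside a bounding box. The endpoint, tags, and bounding box are generic OSM choices, not necessarily the scraper's internals, and the popularity-index enrichment is not covered here.

```python
# Illustrative OSM query via the Overpass API (not the scraper's actual code).
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

# Bounding box (south, west, north, east), roughly central Berlin as an example.
bbox = (52.49, 13.36, 52.54, 13.43)

query = f"""
[out:json][timeout:25];
(
  node["amenity"~"cafe|bar|restaurant"]({bbox[0]},{bbox[1]},{bbox[2]},{bbox[3]});
);
out body;
"""

response = requests.post(OVERPASS_URL, data={"data": query})
response.raise_for_status()
places = response.json()["elements"]

for place in places[:10]:
    tags = place.get("tags", {})
    print(tags.get("name", "<unnamed>"), "-", tags.get("amenity"))
```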