🍮 FLAM: Frame-Wise Language-Audio Modeling

Recent multi-modal audio-language models (ALMs) excel at text-audio retrieval but struggle with frame-wise audio understanding. Prior works use temporally-aware labels or unsupervised training to improve frame-wise capabilities, but they still lack the fine-grained labeling needed to pinpoint when an event occurs. While traditional sound event detection models can precisely localize events, they are limited to pre-defined categories, making them ineffective for real-world scenarios with out-of-distribution events. In this work, we introduce FLAM, an open-vocabulary contrastive audio-language model capable of localizing specific sound events. FLAM employs a memory-efficient and calibrated frame-wise objective with logit adjustment to address spurious correlations, such as event dependencies and label imbalances, during training. To enable frame-wise supervision, we leverage a large-scale dataset with diverse audio events, LLM-generated captions, and simulation. Experimental results and case studies demonstrate that FLAM significantly improves open-vocabulary localization while maintaining strong performance in global retrieval and downstream tasks.
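To make the idea of a frame-wise objective with logit adjustment concrete, below is a minimal, illustrative PyTorch sketch. It scores per-frame audio embeddings against in-batch text embeddings, adds a log-prior term to the logits as a logit adjustment for label imbalance, and computes a per-frame binary cross-entropy. The function name, tensor shapes, and the specific form of the adjustment are assumptions made for illustration; this is not the exact FLAM implementation from the paper.

```python
import torch
import torch.nn.functional as F

def frame_wise_contrastive_loss(frame_embeds, text_embeds, frame_labels,
                                event_log_prior, scale=10.0):
    """Illustrative frame-wise objective (not the exact FLAM loss).

    frame_embeds:    (B, T, D) per-frame audio embeddings
    text_embeds:     (B, D)    one event/caption embedding per clip in the batch
    frame_labels:    (B, T)    1 where the clip's own event is active, else 0
    event_log_prior: (B,)      log frequency of positive frames per event,
                               added to the logits as a logit adjustment
    """
    frame_embeds = F.normalize(frame_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)

    # Score every frame of every clip against every text query: (B, B, T)
    logits = scale * torch.einsum("itd,jd->ijt", frame_embeds, text_embeds)

    # Logit adjustment: shift each event's logits by its log prior so that
    # rare events are not systematically suppressed during training.
    logits = logits + event_log_prior.view(1, -1, 1)

    # Matched (clip, text) pairs use the frame-level activity labels;
    # mismatched in-batch pairs are treated as all-negative frames.
    B = frame_embeds.size(0)
    targets = torch.zeros_like(logits)
    targets[torch.arange(B), torch.arange(B)] = frame_labels.float()

    return F.binary_cross_entropy_with_logits(logits, targets)
```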

On this page, we showcase the audio samples and sound event detection results from the FLAM experiments discussed in the paper, as well as FLAM detection results on real-world audio examples not seen during training.

Click on an image to enlarge it. Please allow a few seconds for the videos and audio to load.

For more detailed information about our ASFX-SED dataset and its evaluation, please visit our ASFX-SED Dataset page.

FLAM Zero-Shot Detection Results on Real-World Examples

We present FLAM's zero-shot, open-vocabulary sound event detection on real-world audio examples not seen during training. We also include detection results, shown in red, for events that are not present in the examples. The title of each video links to the corresponding YouTube video.

Sound Event Detection Results

ASFX-SED Dataset (zero-shot, open-vocabulary)

example 1

example 2

example 3

example 4

Synthetic Held-out Dataset (zero-shot, open-vocabulary)

example 1

example 2

example 3

example 4

AudioSet-Strong Dataset

example 1

example 2

example 3

example 4

FLAM

(Left) Traditional ALMs derive global audio and text embeddings, treating each ground-truth audio-text pair as a positive sample and all other pairs in the batch as negatives. (Right) FLAM instead processes frame-level audio features and trains on temporally labeled sound events paired with text descriptions. FLAM enables finer-grained localization of events based on text queries, yielding accurate open-vocabulary sound event detection.
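For comparison with the left panel, the following is a minimal sketch of the standard CLAP-style in-batch contrastive objective over global embeddings, assuming one clip-level audio embedding and one text embedding per example; it is included only to contrast with the frame-wise sketch shown after the abstract, and the function name and temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def global_contrastive_loss(audio_embeds, text_embeds, temperature=0.07):
    """CLAP-style in-batch contrastive loss over global embeddings.

    audio_embeds, text_embeds: (B, D) clip-level embeddings; the i-th audio
    and i-th text form the positive pair, all other in-batch pairs are negatives.
    """
    audio_embeds = F.normalize(audio_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)

    # Pairwise similarity matrix: (B, B)
    logits = audio_embeds @ text_embeds.t() / temperature

    # Positives lie on the diagonal.
    targets = torch.arange(audio_embeds.size(0), device=logits.device)

    # Symmetric cross-entropy over audio-to-text and text-to-audio directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```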