Imagine to Hear: Auditory Knowledge Generation can be an Effective Assistant for Language Models

Anonymous ACL submission

Case Studies

We present three case studies demonstrating the effectiveness of our proposed approach:

1) Imagine to Hear (ITH) Case Study illustrates how generated auditory knowledge enhances language models on sound-related tasks, enabling more accurate auditory reasoning.

2) Dynamic Knowledge Injection (DKI) Case Study highlights the impact of targeted audio span injection, showing how selective auditory knowledge improves model predictions.

3) Fusion Gate (FG) Case Study visualizes the influence of auditory knowledge integration by examining token-wise fusion gate weights, where higher values indicate that a token draws more heavily on the auditory knowledge (a minimal sketch of such a gate follows this list).
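The sketch below shows one way such a token-wise fusion gate can be implemented in PyTorch. The module name, dimensions, and the concatenation-based gating form are our assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class FusionGate(nn.Module):
    """Blend text and audio hidden states with a learned per-token gate."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Project the concatenated [text; audio] states to one scalar gate per token.
        self.gate_proj = nn.Linear(2 * hidden_dim, 1)

    def forward(self, text_hidden: torch.Tensor, audio_hidden: torch.Tensor):
        # text_hidden, audio_hidden: (batch, seq_len, hidden_dim), aligned per token.
        gate = torch.sigmoid(self.gate_proj(torch.cat([text_hidden, audio_hidden], dim=-1)))
        # A gate near 1.0 means the token leans on auditory knowledge; near 0.0, on text alone.
        fused = gate * audio_hidden + (1.0 - gate) * text_hidden
        return fused, gate.squeeze(-1)  # per-token weights, as visualized in the case study
```

Returning the gate values alongside the fused states makes the token-wise weights directly available for the kind of visualization described above.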

Abstract

Language models pretrained on text-only corpora often struggle with tasks that require auditory commonsense knowledge. Previous work addresses this problem by augmenting the language model to retrieve knowledge from external audio databases. This approach has several limitations, such as the potential lack of relevant audio in databases and the high costs associated with constructing and querying the databases. To address these issues, we propose Imagine to Hear, a novel approach that dynamically generates auditory knowledge using generative models. Our framework detects multiple audio-related textual spans in the given prompt and generates corresponding auditory knowledge. We develop several mechanisms to efficiently process multiple pieces of auditory knowledge, including a CLAP-based rejection sampler and a language-audio fusion module. Our experiments show that our method achieves state-of-the-art performance on AuditoryBench without relying on external databases, highlighting the effectiveness of our generation-based approach.
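To make the CLAP-based rejection sampler concrete, here is a hedged sketch using the Hugging Face transformers CLAP model: a candidate clip from a text-to-audio generator is accepted only if its CLAP text-audio similarity to the detected span clears a threshold. The generate_audio callable, the threshold value, and the retry budget are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F
from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")


def clap_similarity(span_text: str, waveform) -> float:
    """Cosine similarity between CLAP text and audio embeddings (48 kHz waveform)."""
    inputs = processor(text=[span_text], audios=[waveform],
                       sampling_rate=48000, return_tensors="pt")
    with torch.no_grad():
        text_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                           attention_mask=inputs["attention_mask"])
        audio_emb = model.get_audio_features(input_features=inputs["input_features"])
    return F.cosine_similarity(text_emb, audio_emb).item()


def sample_with_rejection(span_text, generate_audio, threshold=0.3, max_tries=5):
    """generate_audio is a hypothetical text-to-audio generator: text -> waveform."""
    best = (None, float("-inf"))
    for _ in range(max_tries):
        audio = generate_audio(span_text)
        score = clap_similarity(span_text, audio)
        if score >= threshold:
            return audio, score  # accept: the clip matches the span well enough
        if score > best[1]:
            best = (audio, score)
    return best  # budget exhausted: fall back to the best-scoring candidate
```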

Model Architecture

An illustration of the overall framework of the proposed Imagine to Hear (ITH), consisting of three components:

1) An imagination module, which detects multiple audio-related spans in the given prompt and generates the corresponding auditory knowledge for each (see the first sketch after this list).

2) A fusion module, which combines the variable-length auditory information with the textual representation (see the cross-attention sketch after this list).

3) A language encoder, which processes the output of the fusion module.
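Under the stated assumptions, the imagination module's flow can be sketched as a simple pipeline: a span detector tags audio-related phrases, and each span is passed to a text-to-audio generator. Both detect_spans and generate_audio are hypothetical placeholders standing in for the paper's actual components.

```python
from dataclasses import dataclass


@dataclass
class AudioSpan:
    start: int  # character offset of the span within the prompt
    end: int
    text: str   # e.g., "a dog barking"


def imagine_auditory_knowledge(prompt, detect_spans, generate_audio):
    """detect_spans: str -> list[AudioSpan]; generate_audio: str -> waveform."""
    spans = detect_spans(prompt)
    # One generated clip per detected span; a rejection sampler such as the
    # CLAP-based one sketched above can filter low-quality generations here.
    return [(span, generate_audio(span.text)) for span in spans]
```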
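For the fusion module, one standard way to combine variable-length audio features with text is cross-attention, where text tokens attend to the audio frames. The sketch below uses PyTorch's nn.MultiheadAttention; the dimensions, residual-plus-norm layout, and masking scheme are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Lets text tokens attend to variable-length audio features."""

    def __init__(self, hidden_dim: int, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_dim)

    def forward(self, text_hidden, audio_hidden, audio_padding_mask=None):
        # text_hidden: (batch, text_len, hidden_dim)
        # audio_hidden: (batch, audio_len, hidden_dim); audio_len varies per clip,
        # so padded frames are excluded via the key padding mask.
        attended, _ = self.cross_attn(query=text_hidden, key=audio_hidden,
                                      value=audio_hidden,
                                      key_padding_mask=audio_padding_mask)
        # Residual connection preserves the original textual representation,
        # whose fused output the language encoder then processes.
        return self.norm(text_hidden + attended)
```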

BibTeX

TBD