[Paper] Stochastic Parameter Decomposition — LessWrong
AI

A key step in reverse engineering neural networks is to decompose them into simpler parts that can be studied in relative isolation. Linear parameter decomposition, a framework that has been proposed to resolve several issues with current decomposition methods, decomposes neural network parameters into a sum of sparsely used vectors in parameter space. However, the current […]
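
The object being decomposed is easy to state concretely. Below is a minimal NumPy sketch of the linear parameter decomposition idea, not the paper's actual SPD algorithm: the component matrix `U` and the sparse coefficients `s` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_params = 64      # dimensionality of (flattened) parameter space
n_components = 8   # number of parameter-space components

# Each row of U is one component: a direction in parameter space.
U = rng.normal(size=(n_components, d_params))

def reconstruct(s):
    """Parameters as a sparsely weighted sum of components.

    s: (n_components,) coefficients, mostly zero -- only a few
    components are "used" on any given input.
    """
    return s @ U

# Example: an input that uses only components 1 and 5.
s = np.zeros(n_components)
s[1], s[5] = 0.7, -0.3
theta = reconstruct(s)   # the parameter vector in effect for this input
print(theta.shape)       # (64,)
```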

No, Futarchy Doesn’t Have an EDT Flaw — LessWrong
AI

(A response to this post.) If you use prediction markets to make decisions, you might think they’ll generate EDT decisions: you’re asking for P(A|B), where you care about A, and B is something like “a decision … is taken”. Okay, so say you want to use prediction markets to generate CDT decisions. You want to […]
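
For concreteness, here is the naive "prediction markets give you EDT" recipe the post is responding to, as a toy calculation. The action names and prices are made up; the rule is just "read P(A | action) off the conditional markets and maximize".

```python
# Toy decision rule from conditional prediction markets.
# The price of the market "A, conditional on action b being taken"
# is read as P(A | B = b).
conditional_prices = {
    "launch":      0.62,  # P(good outcome | launch)
    "dont_launch": 0.48,  # P(good outcome | don't launch)
}

# The EDT-flavored rule: pick the action whose conditional market
# assigns the desired outcome the highest probability.
best_action = max(conditional_prices, key=conditional_prices.get)
print(best_action)  # launch
```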

[2409.09510] Comparing Retrieval-Augmentation and Parameter-Efficient Fine-Tuning for Privacy-Preserving Personalization of Large Language Models
AI

[Submitted on 14 Sep 2024 (v1), last revised 26 Jun 2025 (this version, v2)] By Alireza Salemi and 1 other author. Abstract: Despite its substantial impact on various search, recommendation, and question answering […]
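
The two personalization strategies being compared can be caricatured in a few lines. A hedged sketch, where the word-overlap retriever, the profile entries, and the LoRA-style per-user delta are all toy stand-ins rather than the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Retrieval augmentation: the base model stays frozen; personalize
# --- by retrieving from the user's own profile into the prompt.
profile = ["prefers short answers", "works in oncology", "speaks German"]

def rag_prompt(query, profile, k=2):
    # Trivial stand-in retriever: rank profile entries by word overlap.
    score = lambda doc: len(set(doc.split()) & set(query.split()))
    top = sorted(profile, key=score, reverse=True)[:k]
    return "\n".join(top) + "\n\nQuery: " + query

# --- Parameter-efficient fine-tuning: personalize by training a small
# --- low-rank delta per user on top of a frozen weight matrix.
d, r = 16, 2
W_frozen = rng.normal(size=(d, d))
A = rng.normal(size=(d, r)) * 0.01   # trained per user
B = rng.normal(size=(r, d)) * 0.01   # trained per user
W_user = W_frozen + A @ B

print(rag_prompt("short summary of an oncology trial", profile))
print(W_user.shape)  # (16, 16)
```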

IndieFake Dataset: A Benchmark Dataset for Audio Deepfake Detection
AI

[Submitted on 23 Jun 2025 (v1), last revised 26 Jun 2025 (this version, v2)] By Abhay Kumar and 2 other authors. Abstract: Advancements in audio deepfake technology offer benefits like AI assistants, better accessibility for speech […]

Real-time and personalized product recommendations for large e-commerce platforms
AI

arXiv:2506.21368v1 Abstract: We present a methodology to provide real-time and personalized product recommendations for large e-commerce platforms, specifically focusing on fashion retail. Our approach aims to achieve accurate and scalable recommendations with minimal response times, ensuring user satisfaction, leveraging Graph Neural Networks and parsimonious learning methodologies. Extensive experimentation with datasets from one […]
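
To ground the "Graph Neural Networks" phrase: one round of mean-aggregation message passing on a user-item bipartite graph is already enough to score recommendations. Everything below (embeddings, interactions) is random stand-in data; a production system adds learned weights, sampling, and serving infrastructure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 4, 6, 8

user_emb = rng.normal(size=(n_users, d))
item_emb = rng.normal(size=(n_items, d))

# Binary user-item interaction matrix (who clicked/bought what).
inter = rng.random((n_users, n_items)) < 0.3

# One message-passing round: each user absorbs the mean embedding
# of the items they interacted with.
deg = np.maximum(inter.sum(axis=1, keepdims=True), 1)
user_repr = user_emb + (inter @ item_emb) / deg

# Score every item per user by dot product; recommend top-2 unseen.
scores = user_repr @ item_emb.T
scores[inter] = -np.inf          # mask items the user already saw
top2 = np.argsort(-scores, axis=1)[:, :2]
print(top2)
```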

Small Encoders Can Rival Large Decoders in Detecting Groundedness
AI

arXiv:2506.21288v1 Abstract: Augmenting large language models (LLMs) with external context significantly improves their performance in natural language processing (NLP) tasks. However, LLMs struggle to answer queries reliably when the provided context lacks information, often resorting to ungrounded speculation or internal knowledge. Groundedness – generating responses strictly supported by the context – is […]
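
As a deliberately trivial stand-in for the encoder-based groundedness detector: a lexical-overlap check that flags answers whose content words never appear in the provided context. A real detector would be a small fine-tuned encoder; only the interface below matches.

```python
import re

def groundedness_score(context: str, answer: str) -> float:
    """Fraction of the answer's words that also occur in the context.

    Toy lexical baseline standing in for a small encoder classifier:
    same inputs and output range, none of the semantics.
    """
    tokenize = lambda s: set(re.findall(r"[a-z]+", s.lower()))
    ctx, ans = tokenize(context), tokenize(answer)
    return len(ans & ctx) / max(len(ans), 1)

ctx = "The report was published in March 2021 by the WHO."
print(groundedness_score(ctx, "The WHO published the report."))  # 1.0
print(groundedness_score(ctx, "It was retracted last year."))    # 0.2
```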

From Understanding to Omni-Modal Reasoning with Context
AI

[2504.05312] Towards Adaptive Memory-Based Optimization for Enhanced Retrieval-Augmented Generation
AI

[Submitted on 19 Feb 2025 (v1), last revised 26 Jun 2025 (this version, v2)] By Qitao Qin and 4 other authors. Abstract: Retrieval-Augmented Generation (RAG), by integrating non-parametric knowledge from external knowledge bases into models, has emerged […]
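
A hedged sketch of what a memory layer in front of retrieval can look like at the interface level; the cache policy and word-overlap retriever are illustrative assumptions, not the paper's method, which learns what to store.

```python
class MemoryRAG:
    """Toy RAG retrieval loop with a reusable memory of past retrievals."""

    def __init__(self, corpus):
        self.corpus = corpus   # external knowledge base
        self.memory = {}       # past query -> retrieved passages

    def retrieve(self, query, k=2):
        if query in self.memory:       # memory hit: skip the corpus scan
            return self.memory[query]
        score = lambda doc: len(set(doc.split()) & set(query.split()))
        passages = sorted(self.corpus, key=score, reverse=True)[:k]
        self.memory[query] = passages  # write back for future queries
        return passages

corpus = [
    "RAG augments generation with retrieved passages",
    "memory modules cache useful context across queries",
    "encoders embed text into dense vectors",
]
rag = MemoryRAG(corpus)
print(rag.retrieve("how does RAG use retrieved passages"))
print(rag.retrieve("how does RAG use retrieved passages"))  # from memory
```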
