<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>RAG | UCSC OSPO</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/tag/rag/</link><atom:link href="https://deploy-preview-1007--ucsc-ospo.netlify.app/tag/rag/index.xml" rel="self" type="application/rss+xml"/><description>RAG</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Fri, 05 Sep 2025 00:00:00 +0000</lastBuildDate><image><url>https://deploy-preview-1007--ucsc-ospo.netlify.app/media/logo_hub6795c39d7c5d58c9535d13299c9651f_74810_300x300_fit_lanczos_3.png</url><title>RAG</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/tag/rag/</link></image><item><title>Final Report: A Systematic Investigation into the Reproducibility of RAG Systems</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre25/pnnl/llm_rag_reproducibility/20250905-wbq321/</link><pubDate>Fri, 05 Sep 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre25/pnnl/llm_rag_reproducibility/20250905-wbq321/</guid><description>&lt;p>I&amp;rsquo;m Baiqiang, and this is the final report for the &lt;a href="https://ucsc-ospo.github.io/project/osre25/pnnl/llm_rag_reproducibility/" target="_blank" rel="noopener">Enhancing Reproducibility in RAG Frameworks for Scientific Workflows&lt;/a> project, mentored by Luanzheng &amp;ldquo;Lenny&amp;rdquo; Guo and Dongfang Zhao. This project successfully developed a novel framework to quantitatively measure reproducibility in AI systems, yielding several surprising and impactful results.&lt;/p>
&lt;h3 id="the-challenge-the-need-for-systematic-measurement">The Challenge: The Need for Systematic Measurement&lt;/h3>
&lt;p>Retrieval-Augmented Generation (RAG) is a cornerstone of AI for science, but its reliability is often compromised by non-determinism. While this issue was a known concern, a fundamental challenge was the lack of standardized tools and methodologies to systematically measure and quantify the sources of this inconsistency. Without a rigorous way to analyze the problem, it was difficult to move beyond ad-hoc tests and establish the true root causes, hindering the development of truly trustworthy AI systems for science.&lt;/p>
&lt;h3 id="our-contribution-the-reprorag-framework">Our Contribution: The ReproRAG Framework&lt;/h3>
&lt;p>To address this gap, the central contribution of this project is &lt;strong>ReproRAG&lt;/strong>, a comprehensive, open-source benchmarking framework. ReproRAG is designed to systematically investigate sources of uncertainty across the entire RAG pipeline by:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Isolating Variables:&lt;/strong> It allows for controlled experiments on embedding models, numerical precision, retrieval algorithms, hardware configurations (CPU/GPU), and distributed execution environments.&lt;/li>
&lt;li>&lt;strong>Quantifying Uncertainty:&lt;/strong> It employs a suite of metrics—including Exact Match Rate, Jaccard Similarity, and Kendall&amp;rsquo;s Tau—to precisely measure the impact of each variable on the final retrieved results.&lt;/li>
&lt;/ul>
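&lt;p>To make these metrics concrete, here is a minimal, dependency-free sketch of how two retrieved rankings can be scored, including the Overlap Coefficient reported below (illustrative only; the function names are ours, not ReproRAG&amp;rsquo;s API):&lt;/p>

```python
def exact_match(a, b):
    # 1.0 only when both runs return the identical ranked list
    return 1.0 if a == b else 0.0

def jaccard(a, b):
    sa, sb = set(a), set(b)
    return len(sa.intersection(sb)) / len(sa.union(sb))

def overlap_coefficient(a, b):
    sa, sb = set(a), set(b)
    return len(sa.intersection(sb)) / min(len(sa), len(sb))

def kendall_tau(a, b):
    # compare the relative order of every shared document pair
    pos_b = {d: i for i, d in enumerate(b)}
    common = [d for d in a if d in pos_b]
    concordant = discordant = 0
    for i in range(len(common)):
        for j in range(i + 1, len(common)):
            if pos_b[common[j]] > pos_b[common[i]]:
                concordant += 1
            else:
                discordant += 1
    pairs = concordant + discordant
    return (concordant - discordant) / pairs if pairs else 1.0

run1 = ["doc3", "doc1", "doc7", "doc2"]
run2 = ["doc3", "doc1", "doc7", "doc2"]
print(exact_match(run1, run2), jaccard(run1, run2), kendall_tau(run1, run2))  # → 1.0 1.0 1.0
```

&lt;p>Two identical runs score 1.000 on every metric, which is exactly the signature of perfect run-to-run reproducibility reported below.&lt;/p>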
&lt;h3 id="key-findings-a-new-hierarchy-of-uncertainty">Key Findings: A New Hierarchy of Uncertainty&lt;/h3>
&lt;p>Our large-scale empirical study using ReproRAG challenged common assumptions and established a clear hierarchy of what actually impacts reproducibility.&lt;/p>
&lt;ol>
&lt;li>
&lt;p>&lt;strong>Core Algorithms Are Not the Problem:&lt;/strong> Our most surprising finding is that modern retrieval libraries like FAISS are perfectly reproducible out of the box. Across all tested index types (including approximate ones like HNSW and IVF) and execution environments (single-node CPU/GPU and multi-node distributed systems), we achieved perfect run-to-run reproducibility (1.000 scores on all metrics) when environmental factors like random seeds were controlled. This falsifies the common hypothesis that approximate nearest neighbor algorithms are a primary source of randomness.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Embedding Model Choice is a Dominant Source of Variation:&lt;/strong> We found that the choice of embedding model drives more variation in results than any other factor we tested. When comparing outputs from different state-of-the-art models (BGE, E5, Qwen) for the same query, the agreement was very low (e.g., Overlap Coefficient of ~0.43-0.54). This means a scientific conclusion drawn with one model may not be reproducible with another, as they are fundamentally &amp;ldquo;seeing&amp;rdquo; different evidence.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Environmental Factors Introduce Measurable &amp;ldquo;Drift&amp;rdquo;:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Numerical Precision:&lt;/strong> Changing floating-point precision (e.g., FP32 vs. FP16) was a guaranteed source of variation, but it caused a small and quantifiable &amp;ldquo;embedding drift&amp;rdquo; rather than chaotic changes.&lt;/li>
&lt;li>&lt;strong>Data Insertion:&lt;/strong> Incrementally adding new data to an index caused a predictable &amp;ldquo;displacement&amp;rdquo; of old results, not a re-shuffling. The relative ranking of the remaining original documents was perfectly stable (Kendall&amp;rsquo;s Tau of 1.000).&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Common Determinism Flags Can Be Ineffective:&lt;/strong> Our tests showed that popular software-level controls, like &lt;code>cudnn.deterministic&lt;/code> flags in PyTorch, had no observable effect on the output of modern transformer-based embedding models. This underscores the necessity of empirical validation over assuming that framework settings work as advertised.&lt;/p>
&lt;/li>
&lt;/ol>
&lt;h3 id="conclusion">Conclusion&lt;/h3>
&lt;p>This project successfully shifted the focus of the RAG reproducibility problem. The key challenge is not to fix supposedly &amp;ldquo;random&amp;rdquo; algorithms, but to rigorously control the entire experimental environment. We delivered &lt;strong>ReproRAG&lt;/strong>, a framework that empowers researchers to do just that. Our findings provide actionable insights for the community: efforts to improve reproducibility should focus less on the retrieval algorithms themselves and more on disciplined management of embedding models, data versioning, and numerical precision.&lt;/p></description></item><item><title>Mid-Term Report: Uncovering the True Sources of Non-Reproducibility in AI for Science</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre25/pnnl/llm_rag_reproducibility/20250725-wbq321/</link><pubDate>Fri, 01 Aug 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre25/pnnl/llm_rag_reproducibility/20250725-wbq321/</guid><description>&lt;p>Hello, I&amp;rsquo;m Baiqiang. I’m excited to share a mid-term update from the &lt;a href="https://ucsc-ospo.github.io/project/osre25/pnnl/llm_rag_reproducibility/" target="_blank" rel="noopener">Enhancing Reproducibility in RAG Frameworks for Scientific Workflows&lt;/a> project. This journey, mentored by Luanzheng &amp;ldquo;Lenny&amp;rdquo; Guo and Dongfang Zhao, has taken a fascinating and unexpected turn, leading to a much deeper understanding of what it takes to build truly reliable AI for science.&lt;/p>
&lt;h3 id="the-search-for-an-invisible-bug">The Search for an Invisible Bug&lt;/h3>
&lt;p>As a quick recap, our project tackles the critical problem of &lt;strong>non-determinism&lt;/strong> in Retrieval-Augmented Generation (RAG) systems. For science to be trustworthy, it must be repeatable. If an AI system gives different answers to the same question, it fails this fundamental test. Our initial goal, outlined in my &lt;a href="https://www.overleaf.com/read/fcbxtpngdnhw#8cc2c8" target="_blank" rel="noopener">proposal&lt;/a>, was to find and fix the sources of this inconsistency, which we believed lay within the retrieval algorithms themselves.&lt;/p>
&lt;p>To do this, we built a comprehensive testing framework capable of running thousands of controlled experiments. We designed it to meticulously measure the consistency of retrieval results while varying everything from the indexing algorithm to the underlying hardware.&lt;/p>
&lt;h3 id="a-surprising-discovery-the-usual-suspect-is-innocent">A Surprising Discovery: The Usual Suspect is Innocent&lt;/h3>
&lt;p>The common wisdom in the community is that high-performance, approximate search libraries like FAISS are a major source of randomness. We put this to the test, running repeated queries against various index types, including complex ones like &lt;code>HNSW&lt;/code> and &lt;code>IndexIVF&lt;/code>.&lt;/p>
&lt;p>Our results were clear and surprising: &lt;strong>FAISS is remarkably reproducible out of the box.&lt;/strong> When run on a consistent hardware and software stack, it returns the exact same results, every single time. The library appears to have robust internal seed management that ensures deterministic behavior.&lt;/p>
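&lt;p>In sketch form, the repeated-query protocol looks like the following, with a brute-force inner-product search standing in for a FAISS index (the real experiments swap in &lt;code>HNSW&lt;/code>, &lt;code>IndexIVF&lt;/code>, and other index types):&lt;/p>

```python
import numpy as np

def top_k(index_vecs, query, k=5):
    # brute-force inner-product search, a stand-in for a FAISS index
    scores = index_vecs.dot(query)
    return np.argsort(-scores)[:k].tolist()

rng = np.random.default_rng(42)
corpus = rng.standard_normal((1000, 128)).astype(np.float32)
query = rng.standard_normal(128).astype(np.float32)

# issue the same query many times and check for exact agreement
runs = [top_k(corpus, query) for _ in range(100)]
print("run-to-run exact match:", all(r == runs[0] for r in runs))
```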
&lt;p>This finding was a pivotal moment. The non-reproducibility that researchers observe in practice is real, but it doesn&amp;rsquo;t come from where we expected. The problem isn&amp;rsquo;t the algorithm itself, but the environment it runs in. Our investigation immediately shifted to finding the real culprits.&lt;/p>
&lt;h3 id="pinpointing-the-true-sources-of-non-determinism">Pinpointing the True Sources of Non-Determinism&lt;/h3>
&lt;p>Our framework quickly helped us identify the true sources of inconsistency:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Hardware-Induced Variation (CPU vs. GPU):&lt;/strong> This is the most significant factor. Running the exact same retrieval code can produce different document rankings and even different document sets when executed on a CPU versus a GPU. This is likely due to subtle differences in floating-point arithmetic and library optimizations in the hardware stack.&lt;/li>
&lt;li>&lt;strong>The Impact of Numerical Precision:&lt;/strong> We also confirmed that changing the floating-point precision of the data (e.g., from FP32 to FP16) can introduce small numerical variations that are just large enough to reorder the results, potentially changing the evidence the LLM receives.&lt;/li>
&lt;/ol>
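&lt;p>The second effect is easy to demonstrate: when two documents score within FP16 resolution of each other, storing the corpus at half precision can flip their ranking. The vectors below are hand-picked to sit on exactly that boundary (a contrived illustration, not data from our experiments):&lt;/p>

```python
import numpy as np

query = np.ones(4, dtype=np.float32)
# two documents whose FP32 scores differ by roughly 0.001
docs = np.array([
    [1.0, 1.0, 1.0, 1.0006],           # FP32 score ~ 4.0006
    [1.0004, 1.0004, 1.0004, 1.0004],  # FP32 score ~ 4.0016, the FP32 winner
], dtype=np.float32)

rank32 = np.argsort(-(docs.dot(query))).tolist()
docs16 = docs.astype(np.float16).astype(np.float32)  # corpus stored at FP16
rank16 = np.argsort(-(docs16.dot(query))).tolist()
print(rank32, rank16)  # → [1, 0] [0, 1] — the top document flips
```

&lt;p>A reordering this small is enough to change which evidence the LLM receives, which is why precision deserves the same scrutiny as hardware.&lt;/p>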
&lt;h3 id="our-mission-refined-building-tools-for-environmental-control">Our Mission Refined: Building Tools for Environmental Control&lt;/h3>
&lt;p>This discovery has sharpened our project&amp;rsquo;s mission. The challenge is not to &amp;ldquo;fix&amp;rdquo; a supposedly random algorithm, but to develop the tools and best practices to control for the entire experimental environment. Our focus for the second half of the project is to:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Develop a Hardware-Aware Configuration Tracker:&lt;/strong> We are building a tool that goes beyond logging software versions. It will capture the critical details of the hardware environment—CPU/GPU model, CUDA version, etc.—and link them directly to an experiment&amp;rsquo;s results.&lt;/li>
&lt;li>&lt;strong>Create a Cross-Environment Validation Suite:&lt;/strong> Our open-source benchmarking suite will empower researchers to test their own pipelines. Crucially, it will help them identify and diagnose inconsistencies when moving workflows between different machines, such as from a local laptop to a cloud-based GPU.&lt;/li>
&lt;li>&lt;strong>Establish New Best Practices:&lt;/strong> We will distill our findings into clear, actionable guidance. The key recommendation is no longer just about choosing the right algorithm, but ensuring a consistent and well-documented hardware and software environment to guarantee reproducible outcomes.&lt;/li>
&lt;/ol>
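&lt;p>A minimal sketch of the configuration-capture idea in item 1, using only the standard library (the actual tracker will also record GPU model, driver, and CUDA/cuDNN versions, e.g. via &lt;code>torch.version.cuda&lt;/code>):&lt;/p>

```python
import hashlib, json, platform, sys

def capture_environment():
    # snapshot the software/hardware details that affect reproducibility,
    # plus a short fingerprint that can be attached to experiment results
    env = {
        "python": sys.version.split()[0],
        "machine": platform.machine(),
        "os": platform.platform(),
    }
    payload = json.dumps(env, sort_keys=True).encode()
    env["fingerprint"] = hashlib.sha256(payload).hexdigest()[:12]
    return env

print(capture_environment())
```

&lt;p>Linking that fingerprint to every result file makes it immediately visible when two runs were executed in different environments.&lt;/p>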
&lt;p>By following the evidence, we’ve uncovered the root cause of a critical problem in AI-driven research. We are now developing the solutions needed to manage it, paving the way for a future where scientific discoveries powered by AI are built on a foundation of verifiable trust.&lt;/p></description></item><item><title>Midway Through GSoC</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre25/ucsc/embeddings/14072025-devadigapratham/</link><pubDate>Mon, 14 Jul 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre25/ucsc/embeddings/14072025-devadigapratham/</guid><description>&lt;h1 id="midway-through-gsoc">Midway Through GSoC&lt;/h1>
&lt;p>Hello everyone! I’m Pratham Devadiga, and I’m thrilled to share a midterm progress update on my &lt;a href="https://summerofcode.withgoogle.com/programs/2025/projects/GcstSGAO" target="_blank" rel="noopener">GSoC 2025 project&lt;/a> with the Open Source Research Experience (OSRE). My project is focused on building the &lt;strong>first open-source billion-scale vector embeddings dataset&lt;/strong> from &lt;strong>real-world open source code&lt;/strong> to support benchmarking of Approximate Nearest Neighbor (ANN) algorithms and facilitate research in Retrieval-Augmented Generation (RAG).&lt;/p>
&lt;h2 id="project-overview">Project Overview&lt;/h2>
&lt;p>The goal of this project is to address a critical gap in the ecosystem: existing ANN benchmarks are either synthetic or limited in scale. With the explosion of code-focused LLMs and embedding models, there&amp;rsquo;s a pressing need for:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>High-volume, high-dimensional vector datasets&lt;/strong> built from real-world data (open-source codebases).&lt;/li>
&lt;li>&lt;strong>Open, reproducible benchmarks&lt;/strong> that reflect realistic RAG workloads.&lt;/li>
&lt;li>A dataset that can be used to evaluate &lt;strong>ANN libraries&lt;/strong> like FAISS, HNSW, and Annoy on massive and practical retrieval tasks.&lt;/li>
&lt;/ul>
&lt;p>Our approach is to use high-quality open-source code repositories to extract meaningful code chunks, encode them into vector embeddings using open models, and make these datasets publicly available with metadata for downstream benchmarking and analysis.&lt;/p>
&lt;h2 id="progress-so-far">Progress So Far&lt;/h2>
&lt;p>We’ve made substantial foundational progress in the first half of the coding period. Key highlights:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Tested multiple embedding models&lt;/strong> such as &lt;code>CodeBERT&lt;/code>, &lt;code>MiniLM-L6-v2&lt;/code>, and &lt;code>all-mpnet-base-v2&lt;/code>, evaluating trade-offs in speed, dimensionality, and GPU memory.&lt;/li>
&lt;li>&lt;strong>Selected &lt;code>codebert-base&lt;/code>&lt;/strong> (768d) as the current model for phase one due to its stable performance and manageable resource footprint.&lt;/li>
&lt;li>Implemented and validated a complete &lt;strong>script pipeline&lt;/strong> to:
&lt;ul>
&lt;li>Traverse large open-source repositories.&lt;/li>
&lt;li>Extract and chunk code intelligently (functions, classes, modules).&lt;/li>
&lt;li>Encode code into embeddings and attach metadata (repo, file path, license).&lt;/li>
&lt;li>Store results efficiently in parquet and NumPy formats.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Tested all components&lt;/strong> of the pipeline on sample datasets using multi-GPU setups, ensuring compatibility and robustness.&lt;/li>
&lt;/ul>
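&lt;p>End to end, the pipeline steps above can be sketched as follows. The hashing &lt;code>embed&lt;/code> function is a lightweight stand-in for &lt;code>codebert-base&lt;/code>, and the real pipeline also writes parquet metadata (repo, file path, license) alongside the NumPy arrays:&lt;/p>

```python
import ast, hashlib
import numpy as np

def chunk_code(source):
    # split a file into top-level functions and classes, as in the pipeline
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            yield node.name, ast.get_source_segment(source, node)

def embed(text, dim=768):
    # hashing stand-in for codebert-base (768d): each token bumps one bucket
    vec = np.zeros(dim, dtype=np.float32)
    for tok in text.split():
        bucket = int(hashlib.md5(tok.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

source = "def add(a, b):\n    return a + b\n\nclass Greeter:\n    pass\n"
records = [(name, embed(text)) for name, text in chunk_code(source)]
print([name for name, _ in records], records[0][1].shape)
```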
&lt;h2 id="challenges-and-learnings">Challenges and Learnings&lt;/h2>
&lt;p>Building a billion-scale dataset from real-world codebases is no small task. Here&amp;rsquo;s what we’ve encountered and learned along the way:&lt;/p>
&lt;h3 id="1-multi-gpu-pipeline-design">1. Multi-GPU Pipeline Design&lt;/h3>
&lt;p>Naively parallelizing the embedding process caused memory overflow and deadlocks due to model reloading across processes. We refactored the code using &lt;code>torch.multiprocessing&lt;/code> and pinned GPU contexts to avoid such issues, improving throughput on multi-GPU machines.&lt;/p>
&lt;h3 id="2-embedding-trade-offs">2. Embedding Trade-offs&lt;/h3>
&lt;p>We experimented with larger models but found that their generation time and memory use were too high to be practical in early phases. This helped us narrow down to scalable configurations for initial dataset generation.&lt;/p>
&lt;h3 id="3-preparing-for-scale">3. Preparing for Scale&lt;/h3>
&lt;p>Although full-scale embedding generation has not started yet, all scripts are now &lt;strong>modular, parallelized, and reproducible&lt;/strong>, ensuring a smooth transition to billion-scale data generation in the second half.&lt;/p>
&lt;h2 id="whats-next">What’s Next&lt;/h2>
&lt;p>The second half of the project will focus on:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Scaling up embedding generation&lt;/strong> to &amp;gt;1B code chunks across hundreds of open-source repositories.&lt;/li>
&lt;li>&lt;strong>Running benchmarks&lt;/strong> using FAISS, HNSW, and Annoy on these embeddings.&lt;/li>
&lt;li>&lt;strong>Releasing the dataset&lt;/strong> on Hugging Face and AWS S3 with sharded access and metadata.&lt;/li>
&lt;li>&lt;strong>Writing a detailed benchmarking report&lt;/strong> comparing speed, accuracy, and memory trade-offs across ANN algorithms.&lt;/li>
&lt;/ul>
&lt;h2 id="final-thoughts">Final Thoughts&lt;/h2>
&lt;p>This journey so far has taught me a lot about building large-scale ML pipelines, managing real-world compute constraints, and ensuring reproducibility for research-grade datasets. I&amp;rsquo;m grateful to my mentor &lt;strong>Jayjeet Chakraborty&lt;/strong> and the OSRE team for their continuous support and guidance.&lt;/p>
&lt;p>Excited for the next half, where the real scale begins!&lt;/p>
&lt;p>Stay tuned for updates. You can find more about the project on my &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre25/ucsc/embeddings">OSRE project page&lt;/a>.&lt;/p></description></item><item><title>Enhancing Reproducibility in RAG Frameworks for Scientific Workflows</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre25/pnnl/llm_rag_reproducibility/20250625-wbq321/</link><pubDate>Wed, 25 Jun 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre25/pnnl/llm_rag_reproducibility/20250625-wbq321/</guid><description>&lt;p>Hello, I&amp;rsquo;m Baiqiang. As part of the &lt;a href="https://ucsc-ospo.github.io/project/osre25/pnnl/llm_rag_reproducibility/" target="_blank" rel="noopener">Enhancing Reproducibility in RAG Frameworks for Scientific Workflows&lt;/a> project, I am excited to introduce my work on a crucial challenge in modern computational science. My &lt;a href="https://www.overleaf.com/read/fcbxtpngdnhw#8cc2c8" target="_blank" rel="noopener">proposal&lt;/a> under the mentorship of Luanzheng &amp;ldquo;Lenny&amp;rdquo; Guo at Pacific Northwest National Laboratory and Dongfang Zhao at the University of Washington aims to enhance the reproducibility of AI-driven scientific workflows.&lt;/p>
&lt;h3 id="the-problem-a-crisis-of-confidence-in-ai-for-science">The Problem: A Crisis of Confidence in AI for Science&lt;/h3>
&lt;p>Large Language Models (LLMs) are transforming scientific research, from accelerating literature reviews to generating novel hypotheses. However, their power is matched by their pitfalls: a tendency to &amp;ldquo;hallucinate&amp;rdquo; facts and a lack of transparency. Retrieval-Augmented Generation (RAG) was developed as a powerful solution, grounding LLM outputs in factual evidence retrieved from a specific knowledge base (like a database of scientific papers).&lt;/p>
&lt;p>But a hidden problem lurks within RAG: &lt;strong>non-determinism&lt;/strong>. The very first step of a RAG system—the similarity search that finds relevant documents—can produce different results even when asked the same question. Variations in indexing algorithms, data updates, or even the underlying software can change which documents are retrieved. For science, this is a critical flaw. If an experiment cannot be repeated with the same results, its conclusions cannot be trusted. This project tackles that challenge head-on.&lt;/p>
&lt;h3 id="our-mission-forging-a-path-to-reproducible-rag">Our Mission: Forging a Path to Reproducible RAG&lt;/h3>
&lt;p>This project proposes a comprehensive solution to systematically identify, measure, and mitigate non-determinism in RAG frameworks. Our goal is to empower researchers to build and use AI tools with confidence.&lt;/p>
&lt;p>Our approach is built on four key pillars:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Systematic Analysis:&lt;/strong> We will conduct a deep dive into popular RAG components (like FAISS, ScaNN, and HNSW) to pinpoint the exact sources of randomness and variability.&lt;/li>
&lt;li>&lt;strong>Rigorous Benchmarking:&lt;/strong> We will develop a public, open-source benchmarking suite using standardized scientific datasets (from PubMed, arXiv, etc.). This will allow anyone to quantitatively measure the reproducibility of their own RAG pipeline using clear metrics like retrieval overlap and rank correlation.&lt;/li>
&lt;li>&lt;strong>Targeted Enhancements:&lt;/strong> Based on our findings, we will implement practical solutions, including:
&lt;ul>
&lt;li>Promoting deterministic algorithms and configurations.&lt;/li>
&lt;li>Building robust data versioning and provenance tracking tools (inspired by DVC and Git LFS).&lt;/li>
&lt;li>Creating tools for precise configuration management to capture the entire experimental setup.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Practical Guidance and Open Source Tools:&lt;/strong> We will distill our insights into comprehensive documentation, reusable code examples, and best practices. All tools and findings will be contributed back to the open-source community.&lt;/li>
&lt;/ol></description></item><item><title>Building a Billion-Scale Vector Embeddings Dataset</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre25/ucsc/embeddings/14062025-devadigapratham/</link><pubDate>Sun, 15 Jun 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre25/ucsc/embeddings/14062025-devadigapratham/</guid><description>&lt;h1 id="billion-vector-embeddings-dataset">Billion Vector Embeddings Dataset&lt;/h1>
&lt;p>As part of the &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre25/ucsc/embeddings">Billion-Scale Embeddings Dataset project&lt;/a>, my &lt;a href="GSoC-proposal.pdf">proposal&lt;/a>, under the mentorship of &lt;strong>Jayjeet Chakraborty&lt;/strong>, aims to create the first large-scale, real-world vector embeddings dataset—bridging the critical gap in Approximate Nearest Neighbor (ANN) benchmarks and Retrieval-Augmented Generation (RAG) systems.&lt;/p>
&lt;h2 id="motivation">Motivation&lt;/h2>
&lt;p>Existing ANN benchmarks often fall short—they’re either synthetic (like SIFT) or too small-scale (≤1M vectors). With the rapid evolution of LLM-based vector search systems (e.g., OpenAI’s 3072d &lt;code>text-embedding-3-large&lt;/code>), there&amp;rsquo;s a growing need for:&lt;/p>
&lt;ul>
&lt;li>High-dimensional (&amp;gt;1000d), large-scale (&amp;gt;100M) embeddings&lt;/li>
&lt;li>Real-world distributions (Wikipedia-scale text)&lt;/li>
&lt;li>Open, reproducible benchmarks for the community&lt;/li>
&lt;/ul>
&lt;h2 id="project-goals">Project Goals&lt;/h2>
&lt;ul>
&lt;li>Generate &lt;strong>1 billion&lt;/strong> embeddings from English Wikipedia using open-source models.&lt;/li>
&lt;li>Create multiple dimensional variants: &lt;strong>1024d&lt;/strong>, &lt;strong>4096d&lt;/strong>, and &lt;strong>8192d&lt;/strong>.&lt;/li>
&lt;li>Deduplicate, compress, and store embeddings with rich metadata (URL, timestamps, models).&lt;/li>
&lt;li>Benchmark ANN performance on FAISS, HNSW, and Annoy.&lt;/li>
&lt;li>Distribute the dataset via HuggingFace &amp;amp; AWS S3 with shard-level access.&lt;/li>
&lt;/ul>
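&lt;p>As one small example of the curation steps, deduplication can be done with a single content-hash pass before any embedding work (a sketch; the real pipeline operates on Wikipedia-scale shards):&lt;/p>

```python
import hashlib

def dedupe(chunks):
    # one-pass content-hash deduplication, run before embedding
    seen, unique = set(), []
    for chunk in chunks:
        digest = hashlib.sha256(chunk.strip().encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(chunk)
    return unique

chunks = [
    "Alan Turing was born in London.",
    "Alan Turing was born in London.  ",   # whitespace-only duplicate
    "He studied at Cambridge.",
]
print(len(dedupe(chunks)))  # → 2
```

&lt;p>Hashing normalized text rather than comparing embeddings keeps this step cheap enough to run at billion-chunk scale.&lt;/p>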
&lt;h2 id="open-source-impact">Open Source Impact&lt;/h2>
&lt;ul>
&lt;li>&lt;strong>ANN Libraries&lt;/strong>: Enable reproducible benchmarking for real-world workloads.&lt;/li>
&lt;li>&lt;strong>RAG Systems&lt;/strong>: Evaluate and optimize retrieval at scale using real Wikipedia text.&lt;/li>
&lt;li>&lt;strong>Researchers&lt;/strong>: Conduct large-scale studies on dimensionality, ANN accuracy, and compression trade-offs.&lt;/li>
&lt;/ul>
&lt;hr></description></item><item><title>Enhancing Reproducibility in RAG Frameworks for Scientific Workflows</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre25/pnnl/llm_rag_reproducibility/</link><pubDate>Thu, 20 Feb 2025 09:00:00 -0700</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre25/pnnl/llm_rag_reproducibility/</guid><description>&lt;p>Retrieval-Augmented Generation (RAG) frameworks, which merge the capabilities of retrieval systems and generative models, significantly enhance the relevance and accuracy of responses produced by large language models (LLMs). These frameworks retrieve relevant documents from a large corpus and use these documents to inform the generative process, thereby improving the contextuality and precision of the generated content. Ensuring reproducibility in data queries using similarity search within these RAG frameworks is critical for maintaining the reliability and consistency of scientific workflows. Reproducibility ensures that the same input query consistently yields the same output, which is vital for scientific tasks that rely on precise and repeatable results. Inconsistencies can arise from various sources, affecting the trustworthiness of scientific outcomes. Differences in retrieval algorithms can lead to variable sets of documents being retrieved for the same query. Variations in data indexing methods can cause inconsistencies in how documents are ranked and accessed. The stochastic nature of LLM operations introduces an element of randomness in the generative process. Updates in datasets can also alter the baseline against which queries are processed and interpreted, leading to different results over time.&lt;/p>
&lt;p>This proposal aims to address these reproducibility challenges in similarity searches within RAG frameworks. This work involves analyzing the root causes of non-determinism, benchmarking and validating the consistency of query results, implementing enhancements to minimize variability, and developing tools and best practices to ensure reproducibility. Reproducibility in data queries can be influenced by several factors, including updates in datasets, differences in retrieval algorithms, varying data indexing methods, and the stochastic nature of LLM operations. Each of these factors can cause variability in the documents retrieved and in the generated responses. Ensuring consistency in query results across different runs is crucial for maintaining the integrity of LLM-driven scientific research, allowing researchers to confidently build upon prior work and achieve reliable, trustworthy outcomes.&lt;/p>
&lt;h3 id="workplan">Workplan&lt;/h3>
&lt;p>The proposed work will include: (1) Identifying sources of non-determinism and variability, such as algorithmic differences and indexing methods, in RAG; (2) Utilizing standardized scientific datasets to benchmark the reproducibility of similarity search results across different RAG frameworks; (3) Establishing protocols for handling dataset updates to ensure that such changes do not impact the reproducibility of similarity search results; and (4) Implementing mechanisms to track and document updates to datasets, ensuring that changes are reflected consistently across all instances of the RAG framework. By addressing these areas, the proposed work aims to mitigate challenges related to reproducibility in similarity search queries within RAG frameworks, ultimately enhancing the reliability and trustworthiness of scientific research outcomes.&lt;/p>
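&lt;p>Item (4) might look like the following sketch: an order-independent corpus fingerprint that changes whenever the dataset does, so query results can always be tied to an exact data version (illustrative only, not a committed design):&lt;/p>

```python
import hashlib

def corpus_fingerprint(docs):
    # order-independent content hash: any dataset update yields a new fingerprint
    h = hashlib.sha256()
    for doc in sorted(docs):
        h.update(hashlib.sha256(doc.encode()).digest())
    return h.hexdigest()[:16]

v1 = ["paper A", "paper B"]
v2 = ["paper A", "paper B", "paper C"]   # an incremental dataset update
print(corpus_fingerprint(v1), corpus_fingerprint(v2))
```

&lt;p>Recording the fingerprint next to every retrieval result makes it trivial to detect when a divergence is explained by a dataset update rather than by the retrieval stack.&lt;/p>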
&lt;ul>
&lt;li>&lt;strong>Topics:&lt;/strong> &lt;code>Reproducibility&lt;/code> &lt;code>LLM&lt;/code> &lt;code>RAG&lt;/code> &lt;code>Scientific Workflows&lt;/code>&lt;/li>
&lt;li>&lt;strong>Skills:&lt;/strong> C/C++, Python&lt;/li>
&lt;li>&lt;strong>Difficulty:&lt;/strong> Medium&lt;/li>
&lt;li>&lt;strong>Size:&lt;/strong> Large (350 hours)&lt;/li>
&lt;li>&lt;strong>Mentors:&lt;/strong> &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/luanzheng-lenny-guo/">Luanzheng &amp;quot;Lenny&amp;quot; Guo&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>RAG-ST: Retrieval-Augmented Generation for Spatial Transcriptomics</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre25/uci/rag-st/</link><pubDate>Wed, 15 Jan 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre25/uci/rag-st/</guid><description>&lt;ul>
&lt;li>&lt;strong>Topics:&lt;/strong> bioinformatics, spatial transcriptomics, gene expression generation, retrieval-augmented generation, large models&lt;/li>
&lt;li>&lt;strong>Skills:&lt;/strong>
&lt;ul>
&lt;li>&lt;strong>Programming Languages:&lt;/strong>
&lt;ul>
&lt;li>Proficient in Python, and familiarity with machine learning libraries such as PyTorch.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Data Analysis:&lt;/strong>
&lt;ul>
&lt;li>Experience with spatial transcriptomics datasets and statistical modeling.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Machine Learning:&lt;/strong>
&lt;ul>
&lt;li>Understanding of vision models, retrieval-based systems, and MLP architectures.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Bioinformatics Knowledge (preferred):&lt;/strong>
&lt;ul>
&lt;li>Familiarity with scRNA-seq data integration and computational biology tools.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Difficulty:&lt;/strong> Advanced&lt;/li>
&lt;li>&lt;strong>Size:&lt;/strong> Large (350 hours). Given the scope of integrating RAG models, building a robust database, and ensuring interpretable predictions, this project involves substantial computational and data preparation work.&lt;/li>
&lt;li>&lt;strong>Mentors:&lt;/strong> &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/ziheng-duan/">Ziheng Duan&lt;/a> (contact person)&lt;/li>
&lt;/ul>
&lt;h3 id="project-idea-description">&lt;strong>Project Idea Description&lt;/strong>&lt;/h3>
&lt;p>Spatial transcriptomics (ST) is a revolutionary technology that provides spatially resolved gene expression measurements, enabling researchers to study cellular behaviour within tissues with unprecedented detail. This technology has transformed our understanding of complex biological systems, such as disease progression, tissue development, and cellular heterogeneity. However, the widespread adoption of ST is limited by its high cost and technical requirements.&lt;/p>
&lt;p>Histology imaging, on the other hand, is far more accessible and cost-effective. If gene expression could be accurately predicted from histology images, it would enable researchers to leverage these abundant images for high-resolution biological insights without the need for expensive spatial transcriptomics experiments. This task has immense potential to democratize spatial transcriptomics research and significantly reduce costs.&lt;/p>
&lt;h3 id="challenges-in-current-approaches">&lt;strong>Challenges in Current Approaches&lt;/strong>&lt;/h3>
&lt;p>Current methods for predicting gene expression from histology images typically involve:&lt;/p>
&lt;ol>
&lt;li>Using large vision models to encode histology image patches into embeddings.&lt;/li>
&lt;li>Employing Multi-Layer Perceptrons (MLPs) to map these embeddings to gene expression profiles.&lt;/li>
&lt;/ol>
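&lt;p>In sketch form, the conventional MLP mapping is simply the following (randomly initialized weights here; in practice the network is trained on paired data, and the 64/128/20 layer sizes are made up for illustration):&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(1)
# embedding dim 64 -> hidden 128 -> 20 genes; weights would be learned
W1, b1 = rng.standard_normal((64, 128)) * 0.1, np.zeros(128)
W2, b2 = rng.standard_normal((128, 20)) * 0.1, np.zeros(20)

def mlp_predict(patch_embedding):
    # image-patch embedding -> hidden layer (ReLU) -> per-gene expression
    hidden = np.maximum(patch_embedding.dot(W1) + b1, 0.0)
    return hidden.dot(W2) + b2

pred = mlp_predict(rng.standard_normal(64))
print(pred.shape)  # → (20,)
```

&lt;p>Nothing in this mapping explains &lt;em>why&lt;/em> a profile was predicted, which is precisely the interpretability gap described above.&lt;/p>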
&lt;p>While these approaches have shown promise, they suffer from two critical limitations:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Accuracy&lt;/strong>: The MLP-based mappings often fail to fully capture the biological complexity encoded in the histology images, leading to suboptimal predictions.&lt;/li>
&lt;li>&lt;strong>Interpretability&lt;/strong>: These models act as black boxes, providing no insight into the underlying biological rationale for the predictions. Researchers cannot determine why a specific gene expression profile was generated, limiting trust and utility in biological contexts.&lt;/li>
&lt;/ul>
&lt;h3 id="project-motivation">&lt;strong>Project Motivation&lt;/strong>&lt;/h3>
&lt;p>To overcome these limitations, this project proposes a novel &lt;strong>Retrieval-Augmented Generation (RAG)&lt;/strong> framework for spatial transcriptomics. Instead of relying solely on black-box MLPs, RAG-ST will:&lt;/p>
&lt;ul>
&lt;li>Retrieve relevant examples from a curated database of paired histology images, scRNA-seq data, and gene expression profiles.&lt;/li>
&lt;li>Use these retrieved examples to inform and enhance the generation process, resulting in predictions that are both more accurate and biologically interpretable.&lt;/li>
&lt;/ul>
&lt;p>This approach not only grounds predictions in biologically meaningful data but also provides transparency by revealing which database entries influenced the results.&lt;/p>
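&lt;p>The retrieve-then-blend idea can be sketched in a few lines of NumPy (toy random data; the real database pairs histology patches with scRNA-seq and expression profiles, and the generation step is richer than a weighted average):&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(0)
# toy database: patch embeddings paired with expression profiles for 20 genes
db_embeddings = rng.standard_normal((500, 64)).astype(np.float32)
db_expression = rng.random((500, 20)).astype(np.float32)

def rag_predict(query_embedding, k=5):
    # retrieve the k most similar patches (cosine) and blend their profiles;
    # returning the retrieved indices keeps the prediction inspectable
    db_norm = db_embeddings / np.linalg.norm(db_embeddings, axis=1, keepdims=True)
    sims = db_norm.dot(query_embedding / np.linalg.norm(query_embedding))
    top = np.argsort(-sims)[:k]
    weights = np.exp(sims[top])
    weights = weights / weights.sum()
    return weights.dot(db_expression[top]), top.tolist()

pred, retrieved = rag_predict(rng.standard_normal(64).astype(np.float32))
print(pred.shape, retrieved)
```

&lt;p>Because the retrieved indices come back alongside the prediction, a researcher can inspect exactly which database entries influenced the result.&lt;/p>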
&lt;h3 id="project-objectives">&lt;strong>Project Objectives&lt;/strong>&lt;/h3>
&lt;ol>
&lt;li>&lt;strong>Database Construction&lt;/strong>:
&lt;ul>
&lt;li>Curate a large and diverse database of histology images paired with scRNA-seq and gene expression data.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Model Development&lt;/strong>:
&lt;ul>
&lt;li>Develop a RAG framework combining vision-based encoders and retrieval-enhanced generation techniques.&lt;/li>
&lt;li>Incorporate interpretability mechanisms to link predicted gene expressions to retrieved examples.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Evaluation and Benchmarking&lt;/strong>:
&lt;ul>
&lt;li>Assess RAG-ST against state-of-the-art methods, focusing on accuracy, interpretability, and biological validity.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ol>
&lt;h3 id="project-deliverables">&lt;strong>Project Deliverables&lt;/strong>&lt;/h3>
&lt;ol>
&lt;li>&lt;strong>Curated Database&lt;/strong>:
&lt;ul>
&lt;li>A publicly available, well-documented database of histology images and gene expression profiles.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>RAG-ST Framework&lt;/strong>:
&lt;ul>
&lt;li>An open-source Python implementation of the RAG-ST model, with retrieval, generation, and visualization tools.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Benchmark Results&lt;/strong>:
&lt;ul>
&lt;li>Comprehensive evaluations demonstrating the benefits of RAG-ST over conventional pipelines.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Documentation and Tutorials&lt;/strong>:
&lt;ul>
&lt;li>User-friendly guides to facilitate adoption by the spatial transcriptomics research community.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ol>
&lt;h3 id="impact">&lt;strong>Impact&lt;/strong>&lt;/h3>
&lt;p>By integrating retrieval-augmented generation with large models, RAG-ST represents a paradigm shift in spatial transcriptomics. It offers a cost-effective, accurate, and interpretable solution for gene expression prediction, democratizing access to high-quality spatial transcriptomic insights and fostering advancements in biological research.&lt;/p>
&lt;hr></description></item></channel></rss>