<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>computational pathology | UCSC OSPO</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/tag/computational-pathology/</link><atom:link href="https://deploy-preview-1007--ucsc-ospo.netlify.app/tag/computational-pathology/index.xml" rel="self" type="application/rss+xml"/><description>computational pathology</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Thu, 29 Jan 2026 00:00:00 +0000</lastBuildDate><image><url>https://deploy-preview-1007--ucsc-ospo.netlify.app/media/logo_hub6795c39d7c5d58c9535d13299c9651f_74810_300x300_fit_lanczos_3.png</url><title>computational pathology</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/tag/computational-pathology/</link></image><item><title>Omni-ST: Instruction-Driven Any-to-Any Multimodal Modeling for Spatial Transcriptomics</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre26/uci-ics/omni-st/</link><pubDate>Thu, 29 Jan 2026 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre26/uci-ics/omni-st/</guid><description>&lt;h2 id="project-description">Project description&lt;/h2>
&lt;p>Spatial transcriptomics (ST) integrates spatially resolved gene expression with tissue morphology, enabling the study of cellular organization, tissue architecture, and disease microenvironments. Modern ST datasets are inherently multimodal, combining histology images (H&amp;amp;E / IF), gene expression vectors, spatial graphs, cell annotations, and free-text pathology descriptions.&lt;/p>
&lt;p>However, most existing ST methods are task-specific and modality-siloed: separate models are trained for image-to-gene prediction, spatial domain identification, cell type classification, or text-based interpretation. This fragmentation limits cross-task generalization and scalability.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="Omni-ST overview" srcset="
/project/osre26/uci-ics/omni-st/omni-st-overview_hu23ddd3d57afcbc47e213a42520991f5c_1307894_4023f7915e2a557bcacee3aecd015061.webp 400w,
/project/osre26/uci-ics/omni-st/omni-st-overview_hu23ddd3d57afcbc47e213a42520991f5c_1307894_8d4e33b30dc811f95fb70a843df58532.webp 760w,
/project/osre26/uci-ics/omni-st/omni-st-overview_hu23ddd3d57afcbc47e213a42520991f5c_1307894_1200x1200_fit_q75_h2_lanczos_3.webp 1200w"
src="https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre26/uci-ics/omni-st/omni-st-overview_hu23ddd3d57afcbc47e213a42520991f5c_1307894_4023f7915e2a557bcacee3aecd015061.webp"
width="760"
height="664"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>&lt;strong>Omni-ST&lt;/strong> proposes a single &lt;strong>instruction-driven any-to-any multimodal backbone&lt;/strong> that treats each spatial transcriptomics modality as a “language” and formulates all tasks as:&lt;/p>
&lt;p>&lt;strong>Instruction + Input Modality → Output Modality&lt;/strong>&lt;/p>
&lt;p>Natural language is elevated from auxiliary metadata to a &lt;strong>unifying interface&lt;/strong> that specifies task intent, target modality, and biological context. This paradigm enables flexible, interpretable, and extensible spatial reasoning within a single model.&lt;/p>
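&lt;p>The any-to-any pattern above can be sketched in a few lines: every modality gets an adapter into and out of a shared token space, and the instruction selects which output adapter fires. This is a minimal illustrative sketch with hypothetical names (&lt;code>omni_step&lt;/code>, &lt;code>W_in&lt;/code>, &lt;code>W_out&lt;/code>) and random stand-in weights, not the Omni-ST implementation.&lt;/p>

```python
# Minimal sketch of instruction-driven any-to-any modeling.
# All names and shapes are illustrative assumptions.
import numpy as np

D = 16                      # shared embedding width (illustrative)
rng = np.random.default_rng(0)

# Input adapters: modality tokens -> shared space (random stand-ins for real encoders).
W_in = {"image": rng.standard_normal((32, D)),
        "gene": rng.standard_normal((50, D))}
# Output adapters: shared space -> modality-specific prediction.
W_out = {"gene": rng.standard_normal((D, 50)),     # 50 genes
         "text": rng.standard_normal((D, 100))}    # 100-token vocab logits

def omni_step(instruction, modality, x):
    """Instruction + input modality -> output modality, one shared pass."""
    h = x @ W_in[modality]                  # tokens in the shared space
    # A real model would fuse an instruction embedding via attention;
    # here the instruction simply names the target modality.
    target = instruction.split("->")[-1].strip()
    return h.mean(axis=0, keepdims=True) @ W_out[target]

patches = rng.standard_normal((8, 32))      # 8 image-patch features
pred = omni_step("predict expression: image -> gene", "image", patches)
print(pred.shape)                           # (1, 50)
```

&lt;p>Under this pattern, adding a new task amounts to registering one adapter and writing an instruction, rather than training a task-specific head.&lt;/p>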
&lt;hr>
&lt;h3 id="project-idea-instruction-driven-any-to-any-modeling-for-spatial-transcriptomics">Project Idea: Instruction-Driven Any-to-Any Modeling for Spatial Transcriptomics&lt;/h3>
&lt;p>&lt;strong>Topics:&lt;/strong> spatial transcriptomics, multimodal learning, instruction tuning, computational pathology&lt;br>
&lt;strong>Skills:&lt;/strong> PyTorch, deep learning, Transformers, multimodal representation learning&lt;br>
&lt;strong>Difficulty:&lt;/strong> Hard&lt;br>
&lt;strong>Size:&lt;/strong> 350 hours&lt;/p>
&lt;p>&lt;strong>Mentor:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Xi Li&lt;/strong> — &lt;a href="mailto:xil43@uci.edu">xil43@uci.edu&lt;/a>&lt;/li>

&lt;/ul>
&lt;p>&lt;strong>Essential information:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>Design a unified multimodal backbone with lightweight modality adapters for histology images, gene expression vectors, spatial graphs, and text.&lt;/li>
&lt;li>Use natural language instructions to condition model behavior, enabling any-to-any translation without task-specific heads.&lt;/li>
&lt;li>Support core tasks including image → gene expression prediction, gene expression → cell type / spatial domain identification, region → text-based biological explanation, and text-based spatial retrieval.&lt;/li>
&lt;li>Evaluate the model across multiple spatial transcriptomics tasks within a single framework, emphasizing generalization and interpretability.&lt;/li>
&lt;li>Develop visualization and interpretation tools such as spatial maps and language-grounded explanations.&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Expected deliverables:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>An open-source PyTorch implementation of the Omni-ST framework.&lt;/li>
&lt;li>Unified multitask benchmarks for spatial transcriptomics.&lt;/li>
&lt;li>Visualization and interpretation tools for spatial predictions.&lt;/li>
&lt;li>Documentation and tutorials demonstrating how to add new tasks via instructions.&lt;/li>
&lt;/ul></description></item><item><title>HistoMoE: A Histology-Guided Mixture-of-Experts Framework for Gene Expression Prediction</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre26/uci/histomoe/</link><pubDate>Tue, 20 Jan 2026 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre26/uci/histomoe/</guid><description>&lt;ul>
&lt;li>&lt;strong>Topics:&lt;/strong> computational pathology, spatial transcriptomics, gene expression prediction, mixture-of-experts, multimodal learning&lt;/li>
&lt;li>&lt;strong>Skills:&lt;/strong>
&lt;ul>
&lt;li>&lt;strong>Programming Languages:&lt;/strong> Python; experience with PyTorch preferred&lt;/li>
&lt;li>&lt;strong>Machine Learning:&lt;/strong> CNNs / vision encoders, mixture-of-experts, multimodal representation learning&lt;/li>
&lt;li>&lt;strong>Data Analysis:&lt;/strong> handling large-scale histology image patches and gene expression matrices&lt;/li>
&lt;li>&lt;strong>Bioinformatics Knowledge (preferred):&lt;/strong> familiarity with spatial transcriptomics or scRNA-seq data&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Difficulty:&lt;/strong> Advanced&lt;/li>
&lt;li>&lt;strong>Size:&lt;/strong> Large (350 hours)&lt;/li>
&lt;li>&lt;strong>Mentors:&lt;/strong> &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/ziheng-duan/">Ziheng Duan&lt;/a> (contact person)&lt;/li>
&lt;/ul>
&lt;h3 id="project-idea-description">&lt;strong>Project Idea Description&lt;/strong>&lt;/h3>
&lt;p>Histology imaging is one of the most widely available data modalities in biomedical research and clinical practice, capturing rich morphological information about tissues and disease states. In parallel, spatial transcriptomics (ST) technologies provide spatially resolved gene expression measurements, enabling unprecedented insights into tissue organization and cellular heterogeneity. However, the high cost and limited accessibility of ST experiments remain a major barrier to their widespread adoption.&lt;/p>
&lt;p>Predicting gene expression directly from histology images offers a promising alternative, enabling molecular-level inference from routinely collected pathology data. Existing approaches typically rely on a single global model that maps image embeddings to gene expression profiles. While effective to some extent, these models struggle to capture the strong organ-, tissue-, and cancer-specific heterogeneity that underlies gene expression patterns.&lt;/p>
&lt;p>This project proposes &lt;strong>HistoMoE&lt;/strong>, a &lt;strong>histology-guided mixture-of-experts (MoE) framework&lt;/strong> that explicitly models biological heterogeneity by learning &lt;strong>specialized expert models&lt;/strong> for different cancer types or organs, and dynamically routing histology image patches to the most relevant experts.&lt;/p>
&lt;h3 id="key-idea-and-technical-approach">&lt;strong>Key Idea and Technical Approach&lt;/strong>&lt;/h3>
&lt;p>As illustrated in the figure above, HistoMoE integrates multiple data modalities and learning components:&lt;/p>
&lt;ol>
&lt;li>
&lt;p>&lt;strong>Vision Encoder&lt;/strong>&lt;br>
Histology image patches are encoded into high-dimensional visual representations using a convolutional or transformer-based vision backbone.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Text / Metadata Encoder&lt;/strong>&lt;br>
Sample-level metadata (e.g., tissue type, organ, disease context) is encoded using a lightweight text or embedding model.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Gating Network&lt;/strong>&lt;br>
A gating network jointly considers image and metadata embeddings to infer routing weights over multiple &lt;strong>cancer- or organ-specific expert models&lt;/strong>.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Expert Models&lt;/strong>&lt;br>
Each expert specializes in modeling gene expression patterns for a specific biological context (e.g., CCRCC, COAD, LUAD), producing patch-level gene expression predictions.&lt;/p>
&lt;/li>
&lt;/ol>
&lt;p>By explicitly modeling biological structure through expert specialization, HistoMoE aims to improve both &lt;strong>prediction accuracy&lt;/strong> and &lt;strong>interpretability&lt;/strong>, allowing researchers to understand which biological experts drive each prediction.&lt;/p>
&lt;h3 id="project-objectives">&lt;strong>Project Objectives&lt;/strong>&lt;/h3>
&lt;ol>
&lt;li>&lt;strong>Design and Implement the HistoMoE Framework&lt;/strong>
&lt;ul>
&lt;li>Build a modular MoE architecture with pluggable vision encoders, gating networks, and expert models.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Multimodal Routing and Expert Specialization&lt;/strong>
&lt;ul>
&lt;li>Explore how image features and metadata jointly inform expert selection.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Benchmarking and Evaluation&lt;/strong>
&lt;ul>
&lt;li>Compare HistoMoE against single-model baselines on multiple cancer and organ-specific spatial transcriptomics datasets.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Interpretability Analysis&lt;/strong>
&lt;ul>
&lt;li>Analyze expert routing behavior to reveal biologically meaningful patterns.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ol>
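&lt;p>As a toy illustration of the routing analysis in Objective 4 (synthetic data, hypothetical tissue labels): averaging per-patch routing weights by tissue type shows which expert each tissue prefers.&lt;/p>

```python
# Toy interpretability check on synthetic routing weights.
import numpy as np

rng = np.random.default_rng(1)
n_patches, n_experts = 6, 3
weights = rng.dirichlet(np.ones(n_experts), size=n_patches)  # rows sum to 1
labels = np.array(["LUAD", "LUAD", "COAD", "COAD", "CCRCC", "CCRCC"])

for tissue in ["LUAD", "COAD", "CCRCC"]:
    mean_w = weights[labels == tissue].mean(axis=0)  # mean routing per tissue
    print(tissue, mean_w.round(2), "dominant expert:", mean_w.argmax())
```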
&lt;h3 id="project-deliverables">&lt;strong>Project Deliverables&lt;/strong>&lt;/h3>
&lt;ol>
&lt;li>&lt;strong>Open-Source HistoMoE Codebase&lt;/strong>
&lt;ul>
&lt;li>Well-documented Python implementation with training, evaluation, and visualization tools.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Benchmark Results&lt;/strong>
&lt;ul>
&lt;li>Quantitative comparisons demonstrating improvements over non-expert baselines.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Visualization and Analysis Tools&lt;/strong>
&lt;ul>
&lt;li>Tools for inspecting expert usage, routing weights, and gene-level predictions.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Documentation and Tutorials&lt;/strong>
&lt;ul>
&lt;li>Clear instructions and examples to enable adoption by the research community.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ol>
&lt;h3 id="impact">&lt;strong>Impact&lt;/strong>&lt;/h3>
&lt;p>HistoMoE introduces an expert-system perspective to histology-based gene expression prediction, bridging morphological and molecular representations through biologically informed specialization. By combining multimodal learning with mixture-of-experts modeling, this project advances the interpretability and accuracy of computational pathology methods and contributes toward scalable, cost-effective alternatives to spatial transcriptomics experiments.&lt;/p></description></item><item><title>Disentangled Generation and Editing of Pathology Images</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre25/uci/pathology_image_disentanglement/</link><pubDate>Fri, 07 Feb 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre25/uci/pathology_image_disentanglement/</guid><description>&lt;ul>
&lt;li>&lt;strong>Topics:&lt;/strong> computational pathology, image generation, disentangled representations, latent space manipulation, deep learning&lt;/li>
&lt;li>&lt;strong>Skills:&lt;/strong>
&lt;ul>
&lt;li>&lt;strong>Programming Languages:&lt;/strong>
&lt;ul>
&lt;li>Proficient in Python, with experience in machine learning libraries such as PyTorch or TensorFlow.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Generative Models:&lt;/strong>
&lt;ul>
&lt;li>Familiarity with Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and contrastive learning methods.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Data Analysis:&lt;/strong>
&lt;ul>
&lt;li>Image processing techniques, statistical analysis, and working with histopathology datasets.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Biomedical Knowledge (preferred):&lt;/strong>
&lt;ul>
&lt;li>Basic understanding of histology, cancer pathology, and biological image annotation.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Difficulty:&lt;/strong> Advanced&lt;/li>
&lt;li>&lt;strong>Size:&lt;/strong> Large (350 hours). The project involves substantial computational work, model development, and evaluation of generated pathology images.&lt;/li>
&lt;li>&lt;strong>Mentors:&lt;/strong> &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/xi-li/">Xi Li&lt;/a> (contact person)&lt;/li>
&lt;/ul>
&lt;h3 id="project-idea-description">&lt;strong>Project Idea Description&lt;/strong>&lt;/h3>
&lt;p>The project aims to advance the &lt;strong>generation and disentanglement of pathology images&lt;/strong>, focusing on precise control over key histological features. By leveraging generative models, we seek to create synthetic histological images where specific pathological characteristics can be independently controlled.&lt;/p>
&lt;h3 id="challenges-in-current-approaches">&lt;strong>Challenges in Current Approaches&lt;/strong>&lt;/h3>
&lt;p>Current methods in histopathology image generation often struggle with:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Feature Entanglement:&lt;/strong> Difficulty in isolating individual factors such as cancer presence, severity, or staining variations.&lt;/li>
&lt;li>&lt;strong>Lack of Control:&lt;/strong> Limited capability to manipulate specific pathological attributes without affecting unrelated features.&lt;/li>
&lt;li>&lt;strong>Consistency Issues:&lt;/strong> Generated images often fail to maintain realistic cellular distributions, affecting biological validity.&lt;/li>
&lt;/ol>
&lt;h3 id="project-motivation">&lt;strong>Project Motivation&lt;/strong>&lt;/h3>
&lt;p>This project proposes a &lt;strong>disentangled representation framework&lt;/strong> to address these limitations. By separating key features within the latent space, we aim to:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Control Histological Features:&lt;/strong> Adjust factors such as cancer presence, tumor grade, number of malignant cells, and staining methods.&lt;/li>
&lt;li>&lt;strong>Ensure Spatial Consistency:&lt;/strong> Maintain the natural distribution of cells during image reconstruction and editing.&lt;/li>
&lt;li>&lt;strong>Enable Latent Space Manipulation:&lt;/strong> Provide interpretable controls for editing and generating realistic histopathology images.&lt;/li>
&lt;/ul>
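&lt;p>The editing interface these goals imply can be sketched as follows, assuming a hypothetical latent layout in which each histological factor occupies a known slice of the latent vector; editing shifts only the chosen slice, leaving unrelated attributes fixed. A trained model would learn these factor directions rather than fix them by index.&lt;/p>

```python
# Sketch of disentangled latent editing; the factor layout is assumed.
import numpy as np

SLICES = {"cancer_presence": slice(0, 4),
          "tumor_grade": slice(4, 8),
          "stain_style": slice(8, 12)}   # assumed 12-dim latent

def edit_factor(z, factor, delta):
    """Shift a single disentangled factor, leaving the rest untouched."""
    z_new = z.copy()
    z_new[SLICES[factor]] += delta       # only this slice moves
    return z_new

z = np.zeros(12)
z_edited = edit_factor(z, "tumor_grade", 1.5)
print(z_edited[4:8], z_edited[:4])       # only the grade slice moved
```

&lt;p>Feeding &lt;code>z_edited&lt;/code> back through the decoder would then change the depicted tumor grade while, ideally, preserving staining and cell layout — exactly the spatial-consistency property the evaluation metrics are meant to verify.&lt;/p>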
&lt;h3 id="project-objectives">&lt;strong>Project Objectives&lt;/strong>&lt;/h3>
&lt;ol>
&lt;li>&lt;strong>Disentangled Representation Learning:&lt;/strong>
&lt;ul>
&lt;li>Develop generative models (e.g., VAEs, GANs) to separate and control histological features.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Latent Space Manipulation:&lt;/strong>
&lt;ul>
&lt;li>Design mechanisms for intuitive editing of pathology images through latent space adjustments.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Spatial Consistency Validation:&lt;/strong>
&lt;ul>
&lt;li>Implement evaluation metrics to ensure that cell distribution remains biologically consistent during image generation.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ol>
&lt;h3 id="project-deliverables">&lt;strong>Project Deliverables&lt;/strong>&lt;/h3>
&lt;ol>
&lt;li>&lt;strong>Generative Model Framework:&lt;/strong>
&lt;ul>
&lt;li>An open-source Python implementation for pathology image generation and editing.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Disentangled Latent Space Tools:&lt;/strong>
&lt;ul>
&lt;li>Tools for visualizing and manipulating latent spaces to control specific pathological features.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Evaluation Metrics:&lt;/strong>
&lt;ul>
&lt;li>Comprehensive benchmarks assessing image quality, feature disentanglement, and biological realism.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>&lt;strong>Documentation and Tutorials:&lt;/strong>
&lt;ul>
&lt;li>Clear guidelines and code examples for the research community to adopt and build upon this work.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ol>
&lt;h3 id="impact">&lt;strong>Impact&lt;/strong>&lt;/h3>
&lt;p>By enabling precise control over generated histology images, this project will contribute to &lt;strong>data augmentation&lt;/strong>, &lt;strong>model interpretability&lt;/strong>, and &lt;strong>biological insight&lt;/strong> in computational pathology. The disentangled approach offers new opportunities for researchers to explore disease mechanisms, develop robust diagnostic models, and improve our understanding of cancer progression and tissue morphology.&lt;/p>
</description></item></channel></rss>