<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>James Davis | UCSC OSPO</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/author/james-davis/</link><atom:link href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/james-davis/index.xml" rel="self" type="application/rss+xml"/><description>James Davis</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><image><url>https://deploy-preview-1007--ucsc-ospo.netlify.app/author/james-davis/avatar_hu63becf002557ad5f758f7485298f8662_148237_270x270_fill_lanczos_center_3.png</url><title>James Davis</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/author/james-davis/</link></image><item><title>FairFace</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre25/ucsc/fair-face/</link><pubDate>Fri, 28 Feb 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre25/ucsc/fair-face/</guid><description>&lt;h3 id="fairface-reproducible-bias-evaluation-in-facial-ai-models-via-controlled-skin-tone-manipulation">FairFace: Reproducible Bias Evaluation in Facial AI Models via Controlled Skin Tone Manipulation&lt;/h3>
&lt;p>Bias in facial AI models remains a persistent issue, particularly concerning skin tone disparities. Many studies report that AI models perform differently on lighter vs. darker skin tones, but these findings are often difficult to reproduce due to variations in datasets, model architectures, and evaluation settings.
&lt;/p>
&lt;p>The goal of this project is to investigate bias in facial AI models by manipulating skin tone and related properties in a controlled, reproducible manner. By leveraging BioSkin, we will adjust melanin levels and other skin properties in existing human datasets to assess whether face-based AI models (e.g., classification and vision-language models) exhibit biased behavior toward specific skin tones.&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Topics:&lt;/strong> &lt;code>Fairness &amp;amp; Bias in AI&lt;/code>, &lt;code>Face Recognition &amp;amp; Vision-Language Models&lt;/code>, &lt;code>Dataset Augmentation for Reproducibility&lt;/code>&lt;/li>
&lt;li>&lt;strong>Skills:&lt;/strong> Machine Learning &amp;amp; Computer Vision, Deep Learning (PyTorch/TensorFlow), Data Augmentation &amp;amp; Image Processing, Reproducibility &amp;amp; Documentation (GitHub, Jupyter Notebooks).&lt;/li>
&lt;li>&lt;strong>Difficulty:&lt;/strong> Moderate&lt;/li>
&lt;li>&lt;strong>Size:&lt;/strong> Medium or Large (can be completed in either 175 or 350 hours, depending on the depth of analysis and the number of models tested)&lt;/li>
&lt;li>&lt;strong>Mentors:&lt;/strong> &lt;a href="mailto:davisje@ucsc.edu">James Davis&lt;/a>, &lt;a href="mailto:pang@soe.ucsc.edu">Alex Pang&lt;/a>&lt;/li>
&lt;/ul>
&lt;h3 id="key-research-questions">Key Research Questions&lt;/h3>
&lt;ol>
&lt;li>Do AI models perform differently based on skin tone?
&lt;ul>
&lt;li>How do classification accuracy, confidence scores, and error rates change when skin tone is altered systematically?&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>What are the underlying causes of bias?
&lt;ul>
&lt;li>Is bias solely dependent on skin tone, or do other skin-related properties (e.g., texture, reflectance) contribute to model predictions?&lt;/li>
&lt;li>Is bias driven by dataset imbalances (e.g., underrepresentation of certain skin tones)?&lt;/li>
&lt;li>Do facial features beyond skin tone (e.g., structure, expression, pose) contribute to biased predictions?&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Are bias trends reproducible?
&lt;ul>
&lt;li>Can we replicate bias patterns across different datasets, model architectures, and experimental setups?&lt;/li>
&lt;li>How consistent are the findings when varying image sources and preprocessing methods?&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ol>
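&lt;p>As one concrete way to approach these questions, the sketch below compares a demographic parity gap and an accuracy gap between skin-tone variants of the same images. The group names, prediction values, and labels are illustrative stand-ins, not outputs of any real model, and the metrics shown are only two of several fairness measures one could compute.&lt;/p>

```python
# Hypothetical sketch: the same faces rendered at two melanin levels,
# with binary "verified" predictions from a face model. All values are
# illustrative assumptions.

def accuracy(preds, labels):
    correct = sum(1 for p, y in zip(preds, labels) if p == y)
    return correct / len(labels)

def demographic_parity_gap(preds_by_group):
    # Difference in positive-prediction rates between groups.
    rates = [sum(p) / len(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

preds = {"lighter": [1, 1, 1, 0, 1], "darker": [1, 0, 1, 0, 0]}
labels = [1, 1, 1, 0, 1]

gap = demographic_parity_gap(preds)
acc_gap = accuracy(preds["lighter"], labels) - accuracy(preds["darker"], labels)
print(f"demographic parity gap: {gap:.2f}, accuracy gap: {acc_gap:.2f}")
```

&lt;p>In the real pipeline, the two prediction lists would come from running the same model on BioSkin-modified variants of each image, so any gap is attributable to the manipulated skin properties rather than to differences between subjects.&lt;/p>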
&lt;h3 id="specific-tasks">Specific Tasks:&lt;/h3>
&lt;ol>
&lt;li>Dataset Selection &amp;amp; Preprocessing
&lt;ul>
&lt;li>Choose appropriate face/human datasets (e.g., FairFace, CelebA, COCO-Human).&lt;/li>
&lt;li>Preprocess images to ensure consistent lighting, pose, and resolution before applying transformations.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Skin Tone Manipulation with BioSkin
&lt;ul>
&lt;li>Systematically modify melanin levels while keeping facial features unchanged.&lt;/li>
&lt;li>Generate multiple variations per image (lighter to darker skin tones).&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Model Evaluation &amp;amp; Bias Analysis
&lt;ul>
&lt;li>Test face classification models (e.g., ResNet, FaceNet) and vision-language models (e.g., BLIP, LLaVA) on the modified images.&lt;/li>
&lt;li>Compute fairness metrics (e.g., demographic parity, equalized odds).&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Investigate Underlying Causes of Bias
&lt;ul>
&lt;li>Compare model behavior across different feature sets.&lt;/li>
&lt;li>Test whether bias persists across multiple datasets and model architectures.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;li>Ensure Reproducibility
&lt;ul>
&lt;li>Develop an open-source pipeline for others to replicate bias evaluations.&lt;/li>
&lt;li>Provide codebase and detailed documentation for reproducibility.&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ol></description></item><item><title>ReasonWorld</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre25/ucsc/reason-world/</link><pubDate>Fri, 28 Feb 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre25/ucsc/reason-world/</guid><description>&lt;h3 id="reasonworld-real-world-reasoning-with-a-long-term-world-model">ReasonWorld: Real-World Reasoning with a Long-Term World Model&lt;/h3>
&lt;p>A world model is an internal representation of an environment that an AI system constructs from external information in order to plan, reason, and interpret its surroundings. It stores the system&amp;rsquo;s understanding of relevant objects, spatial relationships, and/or states in the environment. Recent augmented reality (AR) and wearable technologies such as Meta Aria glasses provide an opportunity to gather rich information from the real world in the form of vision, audio, and spatial data. In parallel, large language models (LLMs), vision-language models (VLMs), and general machine learning algorithms enable nuanced processing of multimodal inputs, labeling, summarizing, and analyzing experiences.&lt;/p>
&lt;p>With &lt;strong>ReasonWorld&lt;/strong>, we aim to use these technologies to enable structured, advanced reasoning about important objects, events, and spaces in real-world environments. With the help of wearable AR technology, the system captures real-world multimodal data, which we use to build a long-term memory modeling toolkit supporting features such as:&lt;/p>
&lt;ul>
&lt;li>Longitudinal and structured data logging: Capture and storage of multimodal data (image, video, audio, location coordinates, etc.)&lt;/li>
&lt;li>Semantic summarization: Automatic scene labeling via LLMs/VLMs to identify key elements in the surroundings&lt;/li>
&lt;li>Efficient retrieval: For querying and revisiting past experiences and answering questions like “Where have I seen this painting before?”&lt;/li>
&lt;li>Adaptability: Continuously refining the system&amp;rsquo;s understanding of the environment and of the relationships between objects and locations&lt;/li>
&lt;li>Adaptive memory prioritization: Assessing the contextual significance of captured data and retrieving the most significant items, so the model retains meaningful, structured representations rather than raw, unfiltered data&lt;/li>
&lt;/ul>
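&lt;p>To make the memory-prioritization idea concrete, the sketch below stores each observation as a structured entry with a significance score and ranks entries by a priority that blends significance, recency decay, and access frequency. All names (&lt;code>MemoryEntry&lt;/code>, &lt;code>priority&lt;/code>) and the scoring weights are illustrative assumptions, not a fixed API.&lt;/p>

```python
# Hypothetical sketch of adaptive memory prioritization: structured
# summaries plus a significance score, ranked at retrieval time instead
# of returning raw sensor data.
from dataclasses import dataclass, field
import time

@dataclass
class MemoryEntry:
    summary: str              # LLM/VLM-generated scene description
    location: tuple           # e.g. (lat, lon) from the AR device
    significance: float       # contextual significance in [0, 1]
    timestamp: float = field(default_factory=time.time)
    access_count: int = 0

def priority(entry, now=None, half_life=86400.0):
    # Blend significance, recency decay, and how often the entry is used.
    now = time.time() if now is None else now
    recency = 0.5 ** ((now - entry.timestamp) / half_life)
    return entry.significance * recency + 0.1 * entry.access_count

memories = [
    MemoryEntry("painting of a ship in the hallway", (36.97, -122.06), 0.9),
    MemoryEntry("empty corridor", (36.97, -122.06), 0.1),
]
top = max(memories, key=priority)
print(top.summary)
```

&lt;p>The half-life decay is one simple choice for letting stale, low-significance observations fade while frequently revisited ones persist; the project would evaluate such policies rather than commit to this one.&lt;/p>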
&lt;p>This real-world reasoning framework with a long-term world model can function as a structured search engine for important objects and spaces, enabling:&lt;/p>
&lt;ul>
&lt;li>Recognizing and tracking significant objects, locations, and events&lt;/li>
&lt;li>Supporting spatial understanding and contextual analysis&lt;/li>
&lt;li>Facilitating structured documentation of environments and changes over time&lt;/li>
&lt;/ul>
&lt;h3 id="alignment-with-summer-of-reproducibility">Alignment with Summer of Reproducibility:&lt;/h3>
&lt;ul>
&lt;li>Core pipeline for AR data ingestion, event segmentation, summarization, and indexing (knowledge graph or vector database) would be made open-source.&lt;/li>
&lt;li>Clear documentation of each module and of how the modules interact&lt;/li>
&lt;li>The project can be tested with standardized datasets, simulated environments, and controlled real-world scenarios, promoting reproducibility&lt;/li>
&lt;li>Opportunities for innovation: a transparent, modular approach invites a broad community to propose novel extensions&lt;/li>
&lt;/ul>
&lt;h3 id="specific-tasks">Specific Tasks:&lt;/h3>
&lt;ul>
&lt;li>Build a pipeline for real-time/batch ingestion and cleaning of data from the wearable AR device&lt;/li>
&lt;li>Build an event segmentation module that classifies whether the current object/event is contextually significant, filtering out less relevant observations&lt;/li>
&lt;li>Use VLMs/LLMs to summarize events from vision/audio/location data, storing the summaries in structured representations such as knowledge graphs or vector databases for later retrieval&lt;/li>
&lt;li>Optimize storage by prioritizing important objects and spaces according to contextual significance and frequency of access&lt;/li>
&lt;li>Implement key information retrieval mechanisms&lt;/li>
&lt;li>Ensure reproducibility by providing datasets and scripts&lt;/li>
&lt;/ul>
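&lt;p>The retrieval task above can be sketched as similarity search over embedded event summaries. A real pipeline would use a learned embedding model and a vector database; here a toy bag-of-words vector and brute-force cosine ranking stand in, and the event strings are invented examples.&lt;/p>

```python
# Minimal sketch of retrieval over summarized events, assuming summaries
# have already been produced by a VLM/LLM. The embedding is a toy
# bag-of-words stand-in for a learned model.
from collections import Counter
import math

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

events = [
    "saw a painting of a ship in the museum hallway",
    "ate lunch at the campus cafe",
    "parked the car on level two of the garage",
]
index = [(e, embed(e)) for e in events]

def query(text, top_k=1):
    q = embed(text)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [e for e, _ in ranked[:top_k]]

print(query("where have I seen this painting before?"))
```

&lt;p>Swapping the toy embedding for a real one and the in-memory list for a vector database changes nothing about the interface, which is what makes the module easy to document and reproduce.&lt;/p>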
&lt;h3 id="reasonworld">ReasonWorld&lt;/h3>
&lt;ul>
&lt;li>&lt;strong>Topics:&lt;/strong> &lt;code>Augmented reality&lt;/code>, &lt;code>Multimodal learning&lt;/code>, &lt;code>Computer vision for AR&lt;/code>, &lt;code>LLM/VLM&lt;/code>, &lt;code>Efficient data indexing&lt;/code>&lt;/li>
&lt;li>&lt;strong>Skills:&lt;/strong> Machine Learning and AI, Augmented Reality and Hardware integration, Data Engineering &amp;amp; Storage Optimization&lt;/li>
&lt;li>&lt;strong>Difficulty:&lt;/strong> Hard&lt;/li>
&lt;li>&lt;strong>Size:&lt;/strong> Large (350 hours)&lt;/li>
&lt;li>&lt;strong>Mentors:&lt;/strong> &lt;a href="mailto:davisje@ucsc.edu">James Davis&lt;/a>, &lt;a href="mailto:pang@soe.ucsc.edu">Alex Pang&lt;/a>&lt;/li>
&lt;/ul></description></item></channel></rss>