<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Nicole Brewer | UCSC OSPO</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/author/nicole-brewer/</link><atom:link href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/nicole-brewer/index.xml" rel="self" type="application/rss+xml"/><description>Nicole Brewer</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><image><url>https://deploy-preview-1007--ucsc-ospo.netlify.app/author/nicole-brewer/avatar_hu24220c580e8f1dbb731a30b1b8c4a74e_26166_270x270_fill_q75_lanczos_center.jpg</url><title>Nicole Brewer</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/author/nicole-brewer/</link></image><item><title>From Friction to Flow: Why I'm Building Widgets for Reproducible Research</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre25/uchicago/jupyter-widgets/20250624-nbrewer/</link><pubDate>Tue, 24 Jun 2025 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre25/uchicago/jupyter-widgets/20250624-nbrewer/</guid><description>&lt;blockquote>
&lt;p>This summer, I’m building Jupyter Widgets to reduce friction in reproducible workflows on Chameleon. Along the way, I’m reflecting on what usability teaches us about the real meaning of reproducibility.&lt;/p>
&lt;/blockquote>
&lt;h2 id="supercomputing-competition-reproducibility-reality-check">Supercomputing Competition: Reproducibility Reality Check&lt;/h2>
&lt;p>My first reproducibility experience threw me into the deep end—trying to recreate a tsunami simulation with a GitHub repository, a scientific paper, and a lot of assumptions. I was part of a student cluster competition at the Supercomputing Conference, where one of our challenges was to reproduce the results of a prior-year paper. I assumed “reproduce” meant something like “re-run the code and get the same numbers.” But what we actually had to do was rebuild the entire computing environment from scratch—on different hardware, with different software versions, and with only vague documentation to guide us. I remember thinking: &lt;em>If all these conditions are so different, what are we really trying to learn by conducting reproducibility experiments?&lt;/em> That experience left me with more questions than answers, and those questions have stayed with me. In fact, they’ve become central to my PhD research.&lt;/p>
&lt;h2 id="summer-of-reproducibility-lessons-from-100-experiments-on-chameleon">Summer of Reproducibility: Lessons from 100+ Experiments on Chameleon&lt;/h2>
&lt;p>I’m currently a PhD student and research software engineer exploring questions around what computational reproducibility really means, and when and why it matters. I also participated in the &lt;strong>&lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre24/depaul/repronb/">Summer of Reproducibility 2024&lt;/a>&lt;/strong>, where I helped assess over 100 public experiments on the Chameleon platform. &lt;a href="https://doi.org/10.1109/e-Science62913.2024.10678673" target="_blank" rel="noopener">Our analysis&lt;/a> revealed key friction points—especially around usability—that don’t necessarily prevent reproducibility in the strictest sense, but introduce barriers in terms of time, effort, and clarity. These issues may not stop an expert from reproducing an experiment, but they can easily deter others from even trying. This summer’s project is about reducing that friction—some of which I experienced firsthand—by improving the interface between researchers and the infrastructure they rely on.&lt;/p>
&lt;h2 id="from-psychology-labs-to-jupyter-notebooks-usability-is-central-to-reproducibility">From Psychology Labs to Jupyter Notebooks: Usability is Central to Reproducibility&lt;/h2>
&lt;p>My thinking shifted further when I was working as a research software engineer at Purdue, supporting a psychology lab that relied on a complex statistical package. For most researchers in the lab, using the tool meant wrestling with cryptic scripts and opaque parameters. So I built a simple Jupyter-based interface to help them visualize input matrices, validate settings, and run analyses without writing code. The difference was immediate: suddenly, people could actually use the tool. It wasn’t just more convenient—it made the research process more transparent and repeatable. That experience was a turning point for me. I realized that usability isn’t a nice-to-have; it’s critical for reproducibility.&lt;/p>
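&lt;p>To give a flavor of that pattern, here’s a minimal sketch using the &lt;code>interact&lt;/code> helper from ipywidgets. The &lt;code>run_analysis&lt;/code> function and its parameters are hypothetical stand-ins for the lab’s statistical package, not the actual tool I built:&lt;/p>
&lt;pre>&lt;code class="language-python"># Illustrative sketch: a thin widget layer over a hypothetical analysis
# function, so researchers can adjust parameters without editing scripts.
from ipywidgets import interact
import numpy as np

def run_analysis(matrix_size=4, regularization=0.1):
    # Stand-in for the statistical package call; it just builds and
    # summarizes a random input matrix to show the interaction.
    matrix = np.random.default_rng(0).random((matrix_size, matrix_size))
    print(f"{matrix_size}x{matrix_size} input matrix, reg={regularization}")
    print(matrix.round(2))

# Sliders are generated automatically from the keyword arguments.
interact(run_analysis, matrix_size=(2, 10), regularization=(0.0, 1.0, 0.05))
&lt;/code>&lt;/pre>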
&lt;h2 id="teaching-jupyter-widget-tutorials-at-scipy">Teaching Jupyter Widget Tutorials at SciPy&lt;/h2>
&lt;p>Since that first experience, I’ve leaned into building better interfaces for research workflows—especially using Jupyter Widgets. Over the past few years, I’ve developed and taught tutorials on how to turn scientific notebooks into interactive web apps, including at the &lt;strong>SciPy conference&lt;/strong> in &lt;a href="https://github.com/Jupyter4Science/scipy23-jupyter-web-app-tutorial" target="_blank" rel="noopener">2023&lt;/a> and &lt;a href="https://github.com/Jupyter4Science/scipy2024-jupyter-widgets-tutorial" target="_blank" rel="noopener">2024&lt;/a>. These tutorials go beyond the basics: I focus on building real, multi-tab applications that reflect the complexity of actual research tools. Teaching others how to do this has deepened my own knowledge of the widget ecosystem and reinforced my belief that good interfaces can dramatically reduce the effort it takes to reproduce and reuse scientific code. That’s exactly the kind of usability work I’m continuing this summer—this time by improving the interface between researchers and the Chameleon platform itself.&lt;/p>
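&lt;p>For a sense of what those tutorials build toward, here’s a minimal sketch of the multi-tab pattern using core ipywidgets; the tab names and contents are illustrative rather than taken from the tutorial code itself:&lt;/p>
&lt;pre>&lt;code class="language-python"># Illustrative sketch of a multi-tab notebook app built from core ipywidgets.
import ipywidgets as widgets

inputs_tab = widgets.VBox([
    widgets.FileUpload(description="Upload data"),
    widgets.IntSlider(description="Iterations", min=1, max=100),
])
results_tab = widgets.Output()  # plots and tables get rendered here

app = widgets.Tab(children=[inputs_tab, results_tab])
app.set_title(0, "Inputs")
app.set_title(1, "Results")
app  # displaying the Tab gives the notebook an app-like layout
&lt;/code>&lt;/pre>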
&lt;h2 id="making-chameleon-even-more-reproducible-with-widgets">Making Chameleon Even More Reproducible with Widgets&lt;/h2>
&lt;p>This summer, I’m returning to Chameleon with a more focused goal: reducing some of the friction I encountered during last year’s reproducibility project. One of Chameleon’s standout features is its Jupyter-based interface, which already goes a long way toward making reproducibility more achievable. My work builds on that strong foundation by improving and extending interactive widgets in the &lt;strong>python-chi&lt;/strong> library — making tasks like provisioning resources, managing leases, and tracking experiment progress on Chameleon even more intuitive. For example, instead of manually digging through IDs to find an existing lease, a widget could present your current leases in a dropdown or table, making it easier to pick up where you left off and avoid unintentionally reserving unnecessary resources. It’s a small feature, but smoothing out this kind of interaction can make the difference between someone giving up and trying again. That’s what this project is about.&lt;/p>
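&lt;p>Here’s a rough sketch of what such a lease picker could look like. The &lt;code>list_my_leases()&lt;/code> helper is a hypothetical stand-in for whatever lease-listing call python-chi exposes, so treat this as an illustration of the interaction rather than the library’s actual API:&lt;/p>
&lt;pre>&lt;code class="language-python"># Illustrative sketch: surface existing leases in a dropdown so a researcher
# can resume work without copying lease IDs around by hand.
import ipywidgets as widgets

def list_my_leases():
    # Hypothetical helper: imagine it queries Chameleon for the current
    # user's leases and returns (name, id) pairs.
    return [("my-experiment-lease", "lease-id-1234"),
            ("gpu-benchmark-lease", "lease-id-5678")]

lease_picker = widgets.Dropdown(
    options=list_my_leases(),  # labels are lease names, values are lease IDs
    description="Lease:",
)

def on_select(change):
    print(f"Selected lease ID: {change['new']}")

lease_picker.observe(on_select, names="value")
lease_picker  # display the dropdown in the notebook
&lt;/code>&lt;/pre>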
&lt;h2 id="looking-ahead-building-for-people-not-just-platforms">Looking Ahead: Building for People, Not Just Platforms&lt;/h2>
&lt;p>I’m excited to spend the next few weeks digging into these questions—not just about what we can build, but how small improvements in usability can ripple outward to support more reproducible, maintainable, and accessible research. Reproducibility isn’t just about rerunning code; it’s about supporting the people who do the work. I’ll be sharing updates as the project progresses, and I’m looking forward to learning (and building) along the way. I’m incredibly grateful to once again take part in this paid experience, made possible by the 2025 Open Source Research Experience team and my mentors.&lt;/p></description></item><item><title>Assessing the Computational Reproducibility of Jupyter Notebooks</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/depaul/20240618-nbrewer/</link><pubDate>Tue, 18 Jun 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/depaul/20240618-nbrewer/</guid><description>&lt;p>Like so many authors before me, I began my first reproducibility study and very first academic publication with the age-old platitude, &amp;ldquo;Reproducibility is a cornerstone of the scientific method.&amp;rdquo; My team and I participated in a competition to replicate the performance improvements promised by a paper presented at last year&amp;rsquo;s Supercomputing conference. We weren&amp;rsquo;t simply re-executing the same experiment with the same cluster; instead, we were trying to confirm that we got similar results on a different cluster with an entirely different architecture. From the very beginning, I struggled to wrap my mind around the many reasons for reproducing computational experiments, their significance, and how to prioritize them. All I knew was that there seemed to be a consensus that reproducibility is important to science and that the experience left me with more questions than answers.&lt;/p>
&lt;p>Not long after that, I started a job as a research software engineer at Purdue University, where I worked heavily with Jupyter Notebooks. I used notebooks and interactive components called widgets to create a web application, which I turned into a reusable template. Our team was enthusiastic about using Jupyter Notebooks to quickly develop web applications because the tools were accessible to the laboratory researchers who ultimately needed to maintain them. I was fortunate to receive the &lt;a href="https://bssw.io/fellows/nicole-brewer" target="_blank" rel="noopener">Better Scientific Software Fellowship&lt;/a> to develop tutorials teaching others how to turn their scientific workflows into web apps with notebooks. I collected those and other resources and established the &lt;a href="https://www.jupyter4.science" target="_blank" rel="noopener">Jupyter4Science&lt;/a> website, a knowledge base and blog about Jupyter Notebooks in scientific contexts. That site aims to improve the accessibility of research data and software.&lt;/p>
&lt;p>There seemed to be an important relationship between computational reproducibility and the improved accessibility and reuse of research code and data, but I still had trouble articulating it. In pursuit of answers, I moved to sunny Arizona to pursue a History and Philosophy of Science degree. My research falls at the confluence of my prior experiences; I&amp;rsquo;m studying the reproducibility of scientific Jupyter Notebooks. I have learned that questions about reproducibility aren&amp;rsquo;t very meaningful without considering specific aspects such as who is doing the experiment and the replication, the nature of the experimental artifacts, and the context in which the experiment takes place.&lt;/p>
&lt;p>I was fortunate to have found a mentor for the Summer of Reproducibility, &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/tanu-malik/">Tanu Malik&lt;/a>, who shares the philosophy that the burden of reproducibility should not rest solely on domain researchers, who would otherwise have to develop yet another expertise. She and her lab have developed &lt;a href="https://github.com/depaul-dice/Flinc" target="_blank" rel="noopener">FLINC&lt;/a>, an application virtualization tool that improves the portability of computational notebooks. Her prior work demonstrated that FLINC reproduces notebooks efficiently, executing and re-executing them in significantly less time and space than Docker containers built for the same notebooks. My work will expand the scope of that original experiment, adding more notebooks to FLINC&amp;rsquo;s test coverage and demonstrating robustness across more diverse computational tasks. We expect to show that infrastructural tools like FLINC improve the success rate of automated reproducibility.&lt;/p>
&lt;p>I&amp;rsquo;m grateful to both the Summer of Reproducibility program managers and my research mentor for this incredible opportunity to further my dissertation research in the context of meaningful collaboration.&lt;/p></description></item></channel></rss>