<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Triveni Gurram | UCSC OSPO</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/author/triveni-gurram/</link><atom:link href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/triveni-gurram/index.xml" rel="self" type="application/rss+xml"/><description>Triveni Gurram</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><image><url>https://deploy-preview-1007--ucsc-ospo.netlify.app/author/triveni-gurram/avatar_hub1ff16563328b609fff06585315d9ce1_313038_270x270_fill_q75_lanczos_center.jpg</url><title>Triveni Gurram</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/author/triveni-gurram/</link></image><item><title>Final Blogpost: Reproducibility in Data Visualization</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/niu/repro-vis/20240828-triveni5/</link><pubDate>Wed, 28 Aug 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/niu/repro-vis/20240828-triveni5/</guid><description>&lt;p>Hello everyone!&lt;/p>
&lt;p>I&amp;rsquo;m Triveni, a Master&amp;rsquo;s student in Computer Science at Northern Illinois University (NIU). I&amp;rsquo;m excited to share my progress on the OSRE 2024 project &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre24/niu/repro-vis/">Categorize Differences in Reproduced Visualizations&lt;/a> focusing on data visualization reproducibility. Working under the mentorship of &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/david-koop/">David Koop&lt;/a>, I&amp;rsquo;ve made some significant strides and faced some interesting challenges.&lt;/p>
&lt;h1 id="reproducibility-in-data-visualization">Reproducibility in data visualization&lt;/h1>
&lt;p>Reproducibility is crucial in data visualization, ensuring that two visualizations accurately convey the same data. This is essential for maintaining transparency and trust in data-driven decision-making. When comparing two visualizations, the challenge is not just spotting differences but determining which differences are meaningful. Tools like OpenCV are often used for image comparison, but they may detect all differences, including those that do not impact the data&amp;rsquo;s interpretation. For example, slight shifts in labels might be flagged as differences even if the underlying data remains unchanged, making it challenging to assess whether the visualizations genuinely differ in terms of the information they convey.&lt;/p>
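&lt;p>A minimal sketch of this problem, using NumPy arrays as stand-ins for two rendered charts (cv2.absdiff behaves the same way): a one-pixel label shift is flagged as a difference even though the underlying data is identical.&lt;/p>

```python
# Two synthetic 10x10 grayscale "renderings" of the same chart: the second
# shifts a label region down by one pixel; the data itself is unchanged.
import numpy as np

original = np.zeros((10, 10), dtype=np.uint8)
original[2, 3:6] = 255          # a text label drawn at row 2
reproduced = np.zeros((10, 10), dtype=np.uint8)
reproduced[3, 3:6] = 255        # same label, shifted down one pixel

# Naive pixel comparison flags every changed pixel, meaningful or not.
diff = np.abs(original.astype(int) - reproduced.astype(int))
changed = int(np.count_nonzero(diff))
print(changed)  # 6 pixels flagged, yet the data is identical
```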
&lt;h1 id="a-breakthrough-with-chartdetective">A Breakthrough with ChartDetective&lt;/h1>
&lt;p>Among various tools like ChartOCR and ChartReader, ChartDetective proved to be the most effective. This tool enabled me to extract data from a range of visualizations, including bar charts, line charts, box plots, and scatter plots. To enhance its capabilities, I modified the codebase to capture pixel values alongside the extracted data and store both in a CSV file. This enhancement allowed for a direct comparison of data values and their corresponding pixel coordinates between two visualizations, focusing on meaningful differences that truly impact data interpretation.&lt;/p>
&lt;h1 id="example-comparing-two-bar-plots-with-chartdetective">Example: Comparing Two Bar Plots with ChartDetective&lt;/h1>
&lt;p>Consider two bar plots that visually appear similar but have slight differences in their data values. Using ChartDetective, I extracted the data and pixel coordinates from both plots and stored this information in a CSV file. The tool then compared these values to identify any discrepancies.&lt;/p>
&lt;p>For instance, in one bar plot, the heights of specific bars were slightly increased. By comparing the CSV files generated by ChartDetective, I was able to pinpoint these differences precisely. The final step involved highlighting these differences on one of the plots using OpenCV, making it clear where the visualizations diverged. This approach ensures that only meaningful differences, those that reflect changes in the data, are considered when assessing reproducibility.&lt;/p>
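&lt;p>The comparison step can be sketched with pandas as follows. The column names ("bar", "value", "pixel_y") are illustrative, not ChartDetective&amp;rsquo;s actual CSV schema.&lt;/p>

```python
# Hypothetical sketch: compare two CSVs of extracted values and pixel
# coordinates, flagging only bars whose data values genuinely differ.
import pandas as pd

plot_a = pd.DataFrame({"bar": ["A", "B", "C"],
                       "value": [4.0, 7.0, 5.0],
                       "pixel_y": [310, 220, 280]})
plot_b = pd.DataFrame({"bar": ["A", "B", "C"],
                       "value": [4.0, 7.6, 5.0],
                       "pixel_y": [310, 202, 280]})

merged = plot_a.merge(plot_b, on="bar", suffixes=("_a", "_b"))
# A pixel shift alone is ignored; only data-value changes count.
merged["differs"] = (merged["value_a"] - merged["value_b"]).abs() > 1e-6
diffs = merged[merged["differs"]]
print(diffs["bar"].tolist())  # ['B']
```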
&lt;ul>
&lt;li>ChartDetective: SVG or PDF file of the visualization is uploaded to extract data.&lt;/li>
&lt;/ul>
&lt;p align="center">
&lt;img src="./barplot_chartdetective.png" alt="ChartDetective" style="width: 80%; height: auto;">
&lt;/p>
&lt;ul>
&lt;li>Data Extraction: Data values along with pixel details are stored in the CSV files.&lt;/li>
&lt;/ul>
&lt;p align="center">
&lt;img src="./barplots_pixels.png" alt="Data_Extraction" style="width: 80%; height: auto;">
&lt;/p>
&lt;ul>
&lt;li>Highlighting the differences: Differences are highlighted on one of the plots using OpenCV.&lt;/li>
&lt;/ul>
&lt;p align="center">
&lt;img src="./Highlighted_differences.png" alt="Highlighting the differences" style="width: 60%; height: auto;">
&lt;/p>
&lt;h1 id="understanding-user-perspectives-on-reproducibility">Understanding User Perspectives on Reproducibility&lt;/h1>
&lt;p>To complement the technical analysis, I created a pilot survey to understand how users perceive reproducibility in data visualizations. The survey evaluates user interpretations of two visualizations and explores which visual parameters impact their decision-making. This user-centered approach is crucial because even minor differences in visual representation can significantly affect how data is interpreted and used.&lt;/p>
&lt;p>Pilot Survey Example:&lt;/p>
&lt;p>Pixel Differences: In one scenario, the height of two bars was altered slightly, introducing a noticeable yet subtle change.&lt;/p>
&lt;p>Label Swapping: In another scenario, the labels of two bars were swapped without changing their positions or heights.&lt;/p>
&lt;p align="center">
&lt;img src="./barchart_labels_swap.png" alt="Label Swapping" style="width: 80%; height: auto;">
&lt;/p>
&lt;p>Participants are asked to evaluate the reproducibility of these visualizations, considering whether the differences impact their interpretation of the data. The goal is to determine which visual parameters, such as bar height or label positioning, users find most critical when assessing the similarity of visualizations.&lt;/p>
&lt;h1 id="future-work-and-conclusion">Future Work and Conclusion&lt;/h1>
&lt;p>Going forward, I plan to develop a proof of concept based on these findings and implement an extensive survey to further explore the impact of visual parameters on users&amp;rsquo; perceptions of reproducibility. Understanding this will help refine tools and methods for comparing visualizations, ensuring they not only look similar but also accurately represent the same underlying data.&lt;/p></description></item><item><title>Reproducibility in Data Visualization</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/niu/repro-vis/20240719-triveni5/</link><pubDate>Fri, 19 Jul 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/niu/repro-vis/20240719-triveni5/</guid><description>&lt;p>Hello everyone!&lt;/p>
&lt;p>I&amp;rsquo;m Triveni, a Master&amp;rsquo;s student in Computer Science at Northern Illinois University (NIU). I&amp;rsquo;m excited to share my progress on the OSRE 2024 project &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre24/niu/repro-vis/">Categorize Differences in Reproduced Visualizations&lt;/a> focusing on data visualization reproducibility. Working under the mentorship of &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/david-koop/">David Koop&lt;/a>, I&amp;rsquo;ve made some significant strides and faced some interesting challenges.&lt;/p>
&lt;h2 id="initial-approach-and-challenges">Initial Approach and Challenges&lt;/h2>
&lt;p>I began my work by comparing original visualizations with reproduced ones using OpenCV for pixel-level comparison. This method helped highlight structural differences but also brought to light some challenges. Different versions of libraries rendered visualizations slightly differently, causing minor positional changes that didn&amp;rsquo;t affect the overall message but were still flagged as discrepancies.&lt;/p>
&lt;p>To address this, I experimented with machine learning models like VGG16, ResNet, and Detectron2. These models are excellent for general image recognition but fell short for our specific needs with charts and visualizations. The results were not as accurate as I had hoped, primarily because these models aren&amp;rsquo;t tailored to handle the unique characteristics of data visualizations.&lt;/p>
&lt;h2 id="shifting-focus-to-chart-specific-models">Shifting Focus to Chart-Specific Models&lt;/h2>
&lt;p>Recognizing the limitations of general ML models, I shifted my focus to chart-specific models like ChartQA, ChartOCR, and ChartReader. These models are designed to understand and summarize chart data, making them more suitable for our goal of comparing visualizations based on the information they convey.&lt;/p>
&lt;h2 id="generating-visualization-variations-and-understanding-human-perception">Generating Visualization Variations and Understanding Human Perception&lt;/h2>
&lt;p>Another exciting development in my work has been generating different versions of visualizations. This will allow me to create a survey to collect human categorizations of visualizations. By understanding how people perceive differences, whether in outliers, shapes, data points, or colors, we can gain insights into which parameters impact human interpretation of visualizations.&lt;/p>
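&lt;p>A hedged Matplotlib sketch of how such survey variants could be generated: one plot with a bar height nudged, one with two category labels swapped while the bars stay in place. The data values and filenames are illustrative.&lt;/p>

```python
# Generate an original bar chart plus two controlled variations off-screen.
import matplotlib
matplotlib.use("Agg")  # render without a display, for batch generation
import matplotlib.pyplot as plt

labels, heights = ["A", "B", "C"], [4.0, 7.0, 5.0]

variants = {
    "original.png": (labels, heights),
    "height_change.png": (labels, [4.0, 7.6, 5.0]),   # subtle pixel difference
    "labels_swapped.png": (["B", "A", "C"], heights),  # labels swapped, bars unchanged
}

for filename, (xs, ys) in variants.items():
    fig, ax = plt.subplots()
    ax.bar(xs, ys)
    fig.savefig(filename)
    plt.close(fig)
```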
&lt;h2 id="next-steps">Next Steps&lt;/h2>
&lt;p>Moving forward, I&amp;rsquo;ll continue to delve into chart-specific models to refine our comparison techniques. Additionally, the survey will provide valuable data on human perception, which can be used to improve our automated comparison methods. By combining these approaches, I hope to create a robust framework for reliable and reproducible data visualizations.&lt;/p>
&lt;p>I&amp;rsquo;m thrilled about the progress made so far and eager to share more updates with you all. Stay tuned for more insights and developments on this exciting journey!&lt;/p></description></item><item><title>Reproducibility in Data Visualization</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/niu/repro-vis/20240613-triveni5/</link><pubDate>Thu, 13 Jun 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/niu/repro-vis/20240613-triveni5/</guid><description>&lt;p>Hello everyone!&lt;/p>
&lt;p>I&amp;rsquo;m Triveni, a Master&amp;rsquo;s student in Computer Science at Northern Illinois University (NIU). When I came across the OSRE 2024 project &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre24/niu/repro-vis/">Categorize Differences in Reproduced Visualizations&lt;/a> focusing on data visualization reproducibility, I was excited because it aligned with my interest in data visualization. While my initial interest was in geospatial data visualization, the project&amp;rsquo;s goal of ensuring reliable visualizations across all contexts really appealed to me. So, I actively worked on understanding the project&amp;rsquo;s key concepts and submitted &lt;a href="https://drive.google.com/file/d/1R1c23oUC7noZo5NrUzuDbjwo0OqbkrAK/view" target="_blank" rel="noopener">my proposal&lt;/a> under the mentorship of &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/david-koop/">David Koop&lt;/a> to join the project.&lt;/p>
&lt;h2 id="early-steps-and-challenges">Early Steps and Challenges:&lt;/h2>
&lt;p>I began working on the project on May 27th, three weeks ago. Setting up the local environment initially presented some challenges, but I persevered and successfully completed the setup process. The past few weeks have been spent exploring the complexities of reproducibility in visualizations, particularly focusing on capturing the discrepancies that arise when using different versions of libraries to generate visualizations. Working with Dr. David Koop as my mentor has been an incredible experience. Our weekly report meetings keep me accountable and focused. While exploring different algorithms and tools to compare visualizations can be challenging at times, it&amp;rsquo;s a fantastic opportunity to learn cutting-edge technologies and refine my problem-solving skills.&lt;/p>
&lt;h2 id="looking-ahead">Looking Ahead:&lt;/h2>
&lt;p>I believe this project can make a valuable contribution to the field of reproducible data visualization. By combining automated comparison tools with a user-centric interface, we can empower researchers and data scientists to make informed decisions about the impact of visualization variations. In future blog posts, I&amp;rsquo;ll share more about the specific tools and techniques being used, and how this framework will contribute to a more reliable and trustworthy approach to data visualization reproducibility.&lt;/p>
&lt;p>Stay tuned!&lt;/p>
&lt;p>I&amp;rsquo;m excited to embark on this journey and share my progress with all of you.&lt;/p></description></item></channel></rss>