<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>python | UCSC OSPO</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/tag/python/</link><atom:link href="https://deploy-preview-1007--ucsc-ospo.netlify.app/tag/python/index.xml" rel="self" type="application/rss+xml"/><description>python</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Tue, 12 Nov 2024 00:00:00 +0000</lastBuildDate><image><url>https://deploy-preview-1007--ucsc-ospo.netlify.app/media/logo_hub6795c39d7c5d58c9535d13299c9651f_74810_300x300_fit_lanczos_3.png</url><title>python</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/tag/python/</link></image><item><title>Final Report: Deriving Realistic Performance Benchmarks for Python Interpreters</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/uutah/static-python-perf/20241113-mrigankpawagi/</link><pubDate>Tue, 12 Nov 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/uutah/static-python-perf/20241113-mrigankpawagi/</guid><description>&lt;p>Hi, I am Mrigank. As a &lt;em>Summer of Reproducibility 2024&lt;/em> fellow, I have been working on &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/uutah/static-python-perf/20240817-mrigankpawagi/">deriving realistic performance benchmarks for Python interpreters&lt;/a> with &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/ben-greenman/">Ben Greenman&lt;/a> from the University of Utah. In particular, we want to benchmark Meta&amp;rsquo;s Static Python interpreter (which is a part of their Cinder project) and compare its performance with CPython on different levels of typing. 
In this post, I will share updates on my work since my &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/uutah/static-python-perf/20240909-mrigankpawagi/">last update&lt;/a>. This post forms my final report for the &lt;em>Summer of Reproducibility 2024&lt;/em>.&lt;/p>
&lt;h2 id="since-last-time-typing-django-files">Since Last Time: Typing Django Files&lt;/h2>
&lt;p>Based on the profiling results from load testing a Wagtail blog site, I identified three modules in Django that were performance bottlenecks and added shallow types to them. These are available on our GitHub repository.&lt;/p>
&lt;ol>
&lt;li>&lt;a href="https://github.com/utahplt/static-python-perf/blob/main/Benchmark/django/shallow/db/backends/sqlite3/_functions.py" target="_blank" rel="noopener">&lt;code>django.db.backends.sqlite3._functions&lt;/code>&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://github.com/utahplt/static-python-perf/blob/main/Benchmark/django/shallow/utils/functional.py" target="_blank" rel="noopener">&lt;code>django.utils.functional&lt;/code>&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://github.com/utahplt/static-python-perf/blob/main/Benchmark/django/shallow/views/debug.py" target="_blank" rel="noopener">&lt;code>django.views.debug&lt;/code>&lt;/a>&lt;/li>
&lt;/ol>
&lt;p>I also wrote a &lt;a href="https://github.com/utahplt/static-python-perf/tree/main/Tool_shed/driver" target="_blank" rel="noopener">script&lt;/a> to mix untyped, shallow-typed, and advanced-typed versions of a Python module and create a series of such &lt;em>gradually typed&lt;/em> versions.&lt;/p>
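&lt;p>As a rough illustration of what such a mixing script does, the sketch below combines function-level versions of a module using the standard &lt;code>ast&lt;/code> module. The module sources and function names here are illustrative stand-ins, not the actual benchmark code.&lt;/p>

```python
# Sketch (not the project's actual driver script): build a "mixed" module by
# choosing, per function, either the untyped or the shallow-typed definition.
# The two module sources below are illustrative stand-ins.
import ast

UNTYPED = '''
def add(a, b):
    return a + b

def scale(x, k):
    return x * k
'''

SHALLOW = '''
def add(a: int, b: int) -> int:
    return a + b

def scale(x: float, k: float) -> float:
    return x * k
'''

def mix(untyped_src, typed_src, typed_names):
    """Return source that uses the typed version of each function in typed_names."""
    typed_defs = {}
    for node in ast.parse(typed_src).body:
        if isinstance(node, ast.FunctionDef):
            typed_defs[node.name] = node
    out = []
    for node in ast.parse(untyped_src).body:
        if isinstance(node, ast.FunctionDef) and node.name in typed_names:
            node = typed_defs[node.name]
        out.append(node)
    return "\n\n".join(ast.unparse(node) for node in out)

# One gradually typed configuration: only `add` gets shallow types.
mixed = mix(UNTYPED, SHALLOW, {"add"})
```

Iterating over all subsets of function names yields the full series of gradually typed versions.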
&lt;h2 id="summary-of-experience-and-contributions">Summary of Experience and Contributions&lt;/h2>
&lt;ol>
&lt;li>
&lt;p>I tried to set up different versions of Zulip to make them work with Static Python. My setup scripts are available in our &lt;a href="https://github.com/utahplt/static-python-perf/tree/main/Benchmark/zulip" target="_blank" rel="noopener">repository&lt;/a>. Unfortunately, Zulip&amp;rsquo;s Zerver did not run with Static Python due to the incompatibility of some Django modules. A few non-Django modules also initially threw errors when run with Static Python due to a &lt;a href="https://github.com/facebookincubator/cinder/issues/137" target="_blank" rel="noopener">bug in Cinder&lt;/a> – but I was able to work around it with a hack (described in the GitHub issue I opened on Cinder&amp;rsquo;s repository).&lt;/p>
&lt;/li>
&lt;li>
&lt;p>I created a &lt;em>locust-version&lt;/em> of the small Django-related benchmarks available in &lt;a href="https://github.com/python/pyperformance" target="_blank" rel="noopener">pyperformance&lt;/a> and &lt;a href="https://github.com/facebookarchive/skybison" target="_blank" rel="noopener">skybison&lt;/a>. This helped me confirm that Django by itself is compatible with Static Python and helped me get started with Locust. This too is available in our &lt;a href="https://github.com/utahplt/static-python-perf/tree/main/Benchmark/django_sample" target="_blank" rel="noopener">repository&lt;/a>.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>As described in the midterm report, I created a complete pipeline with Locust to simulate real-world load on a Wagtail blog site. The instructions and scripts for running these load tests as well as profiling the Django codebase are available (like everything else!) in our &lt;a href="https://github.com/utahplt/static-python-perf/tree/main/Benchmark/wagtail" target="_blank" rel="noopener">repository&lt;/a>.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>We added shallow types to the three Django modules mentioned above, and I created scripts to mix untyped, shallow-typed, and advanced-typed versions of a Python module to create a series of &lt;em>gradually typed&lt;/em> versions to be tested for performance. We found that advanced-typed code is often structurally incompatible with shallow-typed code, and we are looking for a solution to this. We are tracking some examples in a &lt;a href="https://github.com/utahplt/static-python-perf/issues/16" target="_blank" rel="noopener">GitHub issue&lt;/a>.&lt;/p>
&lt;/li>
&lt;/ol>
&lt;h2 id="going-forward">Going Forward&lt;/h2>
&lt;p>I had a great time exploring Static Python, typing in Python, load testing, and all the other aspects of this project. I was also fortunate to have a helpful mentor along with other amazing team members in the group. During this project we hit several roadblocks, such as the challenges of setting up real-world applications with Static Python and the difficulty of adding &lt;em>advanced&lt;/em> types, but we are managing to work around them. I will continue to work on this project until we have a complete set of benchmarks and a comprehensive report on the performance of Static Python.&lt;/p>
&lt;p>Our work will continue to be open-sourced and available on our &lt;a href="https://github.com/utahplt/static-python-perf" target="_blank" rel="noopener">GitHub repository&lt;/a> for anyone interested in following along or contributing.&lt;/p></description></item><item><title>Deriving Realistic Performance Benchmarks for Python Interpreters</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/uutah/static-python-perf/20240817-mrigankpawagi/</link><pubDate>Sat, 17 Aug 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/uutah/static-python-perf/20240817-mrigankpawagi/</guid><description>&lt;p>Hi, I am Mrigank. I am one of the &lt;em>Summer of Reproducibility&lt;/em> fellows for 2024, and I will be working on &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre24/uutah/static-python-perf/">deriving realistic performance benchmarks for Python interpreters&lt;/a> with &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/ben-greenman/">Ben Greenman&lt;/a> from the University of Utah.&lt;/p>
&lt;h2 id="background-and-motivation">Background and Motivation&lt;/h2>
&lt;p>Recent work by Meta on a statically typed variant of Python – Static Python – has shown immense promise for moving towards gradually typed languages that achieve soundness without compromising on performance. Lu et al.&lt;sup id="fnref:1">&lt;a href="#fn:1" class="footnote-ref" role="doc-noteref">1&lt;/a>&lt;/sup> provide an evaluation of Static Python and conclude that the performance gains Meta reported for Instagram&amp;rsquo;s web servers are reasonable and not merely the result of refactoring. In fact, the study notes that very little refactoring is typically required to convert existing Python programs to Static Python. However, this study depends on a limited model of the language and does not represent real-world software applications.&lt;/p>
&lt;p>In our project, we aim to create a realistic performance benchmark to reproduce performance improvements reported by Meta and to evaluate the performance of Static Python in real-world software applications. In addition, we will analyze partially-typed code to understand the performance implications of gradual typing in Python.&lt;/p>
&lt;h2 id="key-objectives">Key Objectives&lt;/h2>
&lt;p>We will use widely used open-source applications to derive realistic performance benchmarks for evaluating Static Python. In particular, we will focus on projects that use the Python framework &lt;a href="https://www.djangoproject.com/" target="_blank" rel="noopener">Django&lt;/a>, which is also known to power the backend of Instagram. We plan to begin with &lt;a href="https://github.com/wagtail/wagtail" target="_blank" rel="noopener">Wagtail&lt;/a>, a popular CMS built on Django. We have also identified other potential projects like &lt;a href="https://github.com/zulip/zulip" target="_blank" rel="noopener">Zulip&lt;/a>, &lt;a href="https://github.com/makeplane/plane" target="_blank" rel="noopener">Plane&lt;/a>, and &lt;a href="https://github.com/LibrePhotos/librephotos" target="_blank" rel="noopener">LibrePhotos&lt;/a>. These are all actively maintained projects with large codebases.&lt;/p>
&lt;p>Further, we will analyze the performance of partially-typed code. This will be of value to the Python community as it will provide confidence in gradually moving towards Static Python for improving performance. We will make our benchmarks publicly available for the community to use, reproduce, and extend.&lt;/p>
&lt;h2 id="methodology">Methodology&lt;/h2>
&lt;h3 id="load-testing">Load Testing&lt;/h3>
&lt;p>For each project that we derive benchmarks from, we will design user pipelines that simulate real-world usage and implement them as load tests using the open-source &lt;a href="https://github.com/locustio/locust" target="_blank" rel="noopener">Locust&lt;/a> framework. This will allow us to evaluate the performance of Static Python under real-world loads and scenarios. Locust can spawn thousands of users, each of which independently bombards the system with HTTP requests for a range of tasks defined in its user pipeline. We will host each project on a server (local or cloud) to run these load tests.&lt;/p>
&lt;p>We will profile each project to ensure that our tests cover different parts of the codebase and to identify performance bottlenecks. We can then focus on these bottlenecks while gradually typing the codebase.&lt;/p>
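&lt;p>A minimal sketch of this kind of profiling with the standard-library &lt;code>cProfile&lt;/code> module; the &lt;code>render_page&lt;/code> workload below is an illustrative stand-in for the requests served during a load test.&lt;/p>

```python
# Sketch: profile a workload with the standard-library cProfile and report
# which functions consumed the most cumulative time. The workload here is a
# made-up stand-in for request handling in a real application.
import cProfile
import io
import pstats

def render_page(n):
    # Stand-in for template rendering / ORM work in a real request.
    return "".join(str(i) for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
for _ in range(100):
    render_page(500)
profiler.disable()

buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf)
stats.sort_stats("cumulative").print_stats(10)  # top 10 hotspots
report = buf.getvalue()
```

Sorting by cumulative time surfaces the modules worth typing first.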
&lt;h3 id="gradual-typing">Gradual Typing&lt;/h3>
&lt;p>For typing the code in these projects, we will create two versions of each project: one with so-called &amp;ldquo;shallow&amp;rdquo; type annotations and another with &amp;ldquo;advanced&amp;rdquo; type annotations. The former is relatively easy to implement: we can use tools like &lt;a href="https://github.com/Instagram/MonkeyType" target="_blank" rel="noopener">MonkeyType&lt;/a> to generate stubs that can be quickly verified manually. The latter is non-trivial and will require manual effort. We will then mix and match the three versions of each project (untyped, shallow, and advanced) to create different combinations of typed and untyped code. Note that this mixing can be done at the module level as well as at the function or class level.&lt;/p>
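&lt;p>To make the tiers concrete, here is an illustrative (not project) example. In Static Python itself, the &amp;ldquo;advanced&amp;rdquo; tier uses Cinder-specific primitives (e.g. from its &lt;code>__static__&lt;/code> module), which do not run on stock CPython, so this sketch only contrasts untyped code with shallow annotations of the kind MonkeyType can propose.&lt;/p>

```python
# Illustrative contrast between an untyped function and its shallow-typed
# version (annotations only; the body is unchanged). Names are made up.
from typing import List

# Untyped tier: no annotations at all.
def mean_untyped(xs):
    return sum(xs) / len(xs)

# Shallow tier: parameter and return annotations, e.g. generated by
# MonkeyType from observed call traces and then verified by hand.
def mean_shallow(xs: List[float]) -> float:
    return sum(xs) / len(xs)

result = mean_shallow([1.0, 2.0, 3.0])
```

The advanced tier would go further, e.g. replacing boxed numbers with machine-width primitives, which is why it requires manual effort.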
&lt;h2 id="conclusion">Conclusion&lt;/h2>
&lt;p>This is my first time working on performance-benchmarking and I am excited to pick up new skills in the process. I am also looking forward to interacting with people from the Python community, people from Meta&amp;rsquo;s Static Python team, and also with the maintainers of the projects we will be working on. I will be posting more updates on this project as we make progress. Stay tuned!&lt;/p>
&lt;div class="footnotes" role="doc-endnotes">
&lt;hr>
&lt;ol>
&lt;li id="fn:1">
&lt;p>Kuang-Chen Lu, Ben Greenman, Carl Meyer, Dino Viehland, Aniket Panse, and Shriram Krishnamurthi. Gradual Soundness: Lessons from Static Python. &lt;em>The Art, Science, and Engineering of Programming&lt;/em>, 7(2), 2023.&amp;#160;&lt;a href="#fnref:1" class="footnote-backref" role="doc-backlink">&amp;#x21a9;&amp;#xfe0e;&lt;/a>&lt;/p>
&lt;/li>
&lt;/ol>
&lt;/div></description></item><item><title>Midterm Report: Deriving Realistic Performance Benchmarks for Python Interpreters</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/uutah/static-python-perf/20240909-mrigankpawagi/</link><pubDate>Sat, 17 Aug 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/uutah/static-python-perf/20240909-mrigankpawagi/</guid><description>&lt;p>Hi, I am Mrigank. As a &lt;em>Summer of Reproducibility 2024&lt;/em> fellow, I am working on &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/report/osre24/uutah/static-python-perf/20240817-mrigankpawagi/">deriving realistic performance benchmarks for Python interpreters&lt;/a> with &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/ben-greenman/">Ben Greenman&lt;/a> from the University of Utah. In this post, I will provide an update on the progress we have made so far.&lt;/p>
&lt;h2 id="creating-a-performance-benchmark">Creating a Performance Benchmark&lt;/h2>
&lt;p>We are currently focusing on applications built on top of Django, a widely used Python web framework. For our first benchmark, we chose &lt;a href="https://github.com/wagtail/wagtail" target="_blank" rel="noopener">Wagtail&lt;/a>, a popular content management system. We created a pipeline with Locust to simulate real-world load on the application. All of our work is open-sourced and available on our &lt;a href="https://github.com/utahplt/static-python-perf/blob/main/Benchmark/wagtail/locustfile.py" target="_blank" rel="noopener">GitHub repository&lt;/a>.&lt;/p>
&lt;p>This load-testing pipeline creates hundreds of users who independently create many blog posts on a Wagtail blog site. At the same time, thousands of users are spawned to view these blog posts. Wagtail does not have a built-in API and so it took some initial effort to figure out the endpoints to hit, which I did by inspecting the network logs in the browser while interacting with the Wagtail admin interface.&lt;/p>
&lt;p>A snapshot from a run of the load test with Locust is shown in the featured image above. This snapshot was generated by spawning users from 24 parallel Locust processes. This was done on a local server, and we plan to perform the same experiments on CloudLab soon.&lt;/p>
&lt;h2 id="profiling">Profiling&lt;/h2>
&lt;p>On running the load tests with a profiler, we found that the bottlenecks in the performance arose not from the Wagtail codebase but from the Django codebase. In particular, we identified three modules in Django that consumed the most time during the load tests: &lt;code>django.db.backends.sqlite3._functions&lt;/code>, &lt;code>django.utils.functional&lt;/code>, and &lt;code>django.views.debug&lt;/code>. &lt;a href="https://github.com/dibrinsofor" target="_blank" rel="noopener">Dibri&lt;/a>, a graduate student in Ben&amp;rsquo;s lab, is helping us add types to these modules.&lt;/p>
&lt;h2 id="next-steps">Next Steps&lt;/h2>
&lt;p>Based on these findings, we are now working on typing these modules to see if we can improve the performance of the application by using Static Python. Typing Django is a non-trivial task, and while there have been some efforts to do so, previous attempts like &lt;a href="https://github.com/typeddjango/django-stubs" target="_blank" rel="noopener">django-stubs&lt;/a> are incomplete for our purpose.&lt;/p>
&lt;p>We are also writing scripts to mix untyped, shallow-typed, and advanced-typed versions of a Python file, and run each mixed version several times to obtain a narrow confidence interval for the performance of each version.&lt;/p>
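&lt;p>The confidence-interval computation can be sketched with the standard library alone; the workload, repetition count, and normal approximation below are illustrative choices, not the project&amp;rsquo;s actual settings.&lt;/p>

```python
# Sketch: run a workload several times and compute a 95% confidence interval
# for its mean running time (normal approximation; illustrative only).
import statistics
import time

def workload():
    # Stand-in for one run of a mixed-typing benchmark version.
    return sum(i * i for i in range(50_000))

times = []
for _ in range(10):
    start = time.perf_counter()
    workload()
    times.append(time.perf_counter() - start)

mean = statistics.mean(times)
sem = statistics.stdev(times) / (len(times) ** 0.5)  # standard error
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem
```

More repetitions shrink the standard error and hence the interval.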
&lt;p>We will be posting more updates as we make progress. Thank you for reading!&lt;/p></description></item><item><title>SLICES/pos: Reproducible Experiment Workflows</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre24/tum/slices/</link><pubDate>Sat, 06 Jan 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre24/tum/slices/</guid><description>&lt;p>&lt;a href="https://www.slices-ri.eu/" target="_blank" rel="noopener">SLICES-RI&lt;/a> is a European research initiative aiming to create a digital research infrastructure providing an experimental platform for the upcoming decades.
One of the main goals of this initiative is the creation of fully reproducible experiments.
The SLICES research infrastructure will consist of different experiment sites focusing on different research domains such as AI experiments, Cloud and HPC-driven experiments, or investigations on wireless networks.&lt;/p>
&lt;p>To achieve reproducibility, the research group on network architectures and services of the Technical University of Munich develops the &lt;a href="https://dl.acm.org/doi/10.1145/3485983.3494841" target="_blank" rel="noopener">SLICES plain orchestrating service (SLICES/pos)&lt;/a>.
This framework supports a fully automated structured experiment workflow.
The structure of this workflow acts as a template for the design of experiments.
Users that adhere to this template will create inherently reproducible experiments, a feature we call reproducible-by-design.&lt;/p>
&lt;p>The SLICES/pos framework currently exists in two versions:
(1) a fully managed pos deployment that uses the SLICES/pos framework to manage the entire testbed, and (2) a hosted SLICES/pos deployment.
The hosted SLICES/pos deployment is a temporary deployment that runs inside existing testbeds such as &lt;a href="https://chameleoncloud.org/" target="_blank" rel="noopener">Chameleon&lt;/a> or &lt;a href="https://cloudlab.us/" target="_blank" rel="noopener">CloudLab&lt;/a>.&lt;/p>
&lt;p>&lt;strong>Additional Information:&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://dl.acm.org/doi/10.1145/3485983.3494841" target="_blank" rel="noopener">plain orchestrating service&lt;/a>&lt;/li>
&lt;/ul>
&lt;h3 id="support-additional-programming-languages">Support Additional Programming Languages&lt;/h3>
&lt;ul>
&lt;li>Topics: &lt;code>reproducibility&lt;/code>, &lt;code>statistics&lt;/code>&lt;/li>
&lt;li>Skills: Python&lt;/li>
&lt;li>Difficulty: Medium&lt;/li>
&lt;li>Size: Large (350 hours)&lt;/li>
&lt;li>Mentor: &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/sebastian-gallenmuller/">Sebastian Gallenmüller&lt;/a>, &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/georg-carle/">Georg Carle&lt;/a>, and &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/kate-keahey/">Kate Keahey&lt;/a>&lt;/li>
&lt;/ul>
&lt;p>Design a set of basic examples that demonstrate the usage of pos and can be executed on the SLICES/pos testbed in Munich and on the Chameleon testbed.
This set of basic examples acts as a demonstration of pos&amp;rsquo; capabilities and as a tutorial for new users.
Based on these introductory examples, a more complex experiment shall be designed and executed, demonstrating the portability of the experiments between testbeds.
This experiment involves the entire experiment workflow consisting of the setup and configuration of the testbed infrastructure, the collection of measurement results, and finally, their evaluation and publication.
Multiple results of this experiment shall be created on different testbeds and hardware configurations.
The results will differ depending on the hardware platform on which the experiment was executed.
These results shall be evaluated and analyzed to find a common connection between the different result sets.&lt;/p>
&lt;ul>
&lt;li>Create introductory examples demonstrating the usage of pos&lt;/li>
&lt;li>Design and create a portable complex network experiment based on SLICES/pos&lt;/li>
&lt;li>Execute the experiment on different testbeds (Chameleon, SLICES/pos testbed)&lt;/li>
&lt;li>Analysis of reproduced experiment&lt;/li>
&lt;li>Automated analysis of experimental results&lt;/li>
&lt;li>Deduction of a model describing the fundamental connections between different experiment executions&lt;/li>
&lt;/ul></description></item><item><title>Static Python Perf: Measuring the Cost of Sound Gradual Types</title><link>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre24/uutah/static-python-perf/</link><pubDate>Sat, 06 Jan 2024 00:00:00 +0000</pubDate><guid>https://deploy-preview-1007--ucsc-ospo.netlify.app/project/osre24/uutah/static-python-perf/</guid><description>&lt;p>Gradual typing is a solution to the longstanding tension between typed and
untyped languages: let programmers write code in any flexible language (such
as Python), equip the language with a suitable type system that can describe
invariants in part of a program, and use run-time checks to ensure soundness.&lt;/p>
&lt;p>For now, though, the cost of run-time checks can be enormous.
Order-of-magnitude slowdowns are common. This high cost is a main reason why
TypeScript is unsound by design &amp;mdash; its types are left untrustworthy
in order to avoid run-time checks.&lt;/p>
&lt;p>Recently, a team at Meta built a gradually-typed variant of Python called
(&lt;em>drumroll&lt;/em>) Static Python. They report an incredible 4% increase in CPU
efficiency at Instagram thanks to the sound types in Static Python. This
kind of speedup is unprecedented.&lt;/p>
&lt;p>Other languages may want to follow the Static Python approach to gradual types,
but there are big reasons to doubt the Instagram numbers:&lt;/p>
&lt;ul>
&lt;li>the experiment code is closed source, and&lt;/li>
&lt;li>the experiment itself is not easily reproducible (even for Instagram!).&lt;/li>
&lt;/ul>
&lt;p>Static Python needs a rigorous, reproducible performance evaluation to test
whether it is indeed a fundamental advance for gradual typing.&lt;/p>
&lt;p>Related Work:&lt;/p>
&lt;ul>
&lt;li>Gradual Soundness: Lessons from Static Python
&lt;a href="https://programming-journal.org/2023/7/2/" target="_blank" rel="noopener">https://programming-journal.org/2023/7/2/&lt;/a>&lt;/li>
&lt;li>Producing Wrong Data Without Doing Anything Obviously Wrong!
&lt;a href="https://users.cs.northwestern.edu/~robby/courses/322-2013-spring/mytkowicz-wrong-data.pdf" target="_blank" rel="noopener">https://users.cs.northwestern.edu/~robby/courses/322-2013-spring/mytkowicz-wrong-data.pdf&lt;/a>&lt;/li>
&lt;li>On the Cost of Type-Tag Soundness
&lt;a href="https://users.cs.utah.edu/~blg/resources/pdf/gm-pepm-2018.pdf" target="_blank" rel="noopener">https://users.cs.utah.edu/~blg/resources/pdf/gm-pepm-2018.pdf&lt;/a>&lt;/li>
&lt;/ul>
&lt;h3 id="design-and-run-an-experiment">Design and Run an Experiment&lt;/h3>
&lt;ul>
&lt;li>Topics: &lt;code>performance&lt;/code>, &lt;code>cluster computing&lt;/code>, &lt;code>statistics&lt;/code>&lt;/li>
&lt;li>Skills: Python AST parsing, program generation, scripting, measuring performance&lt;/li>
&lt;li>Difficulty: Medium&lt;/li>
&lt;li>Size: Medium (175 hours)&lt;/li>
&lt;li>Mentor: &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/ben-greenman/">Ben Greenman&lt;/a>&lt;/li>
&lt;/ul>
&lt;p>Design an experiment that covers the space of gradually-typed Static Python programs
in a fair way. Since every variable in a program can have up to 3 different types,
there are easily 3^20 possibilities in small programs &amp;mdash; far too many to measure
exhaustively.&lt;/p>
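&lt;p>The size of this configuration space, and one way to draw a fair sample from it, can be sketched in a few lines (the tier names and site count below are illustrative):&lt;/p>

```python
# Sketch: with 3 typing tiers per annotation site, the number of gradually
# typed configurations grows as 3**n, so exhaustive measurement is infeasible
# and a uniform random sample is drawn instead.
import random

TIERS = ("untyped", "shallow", "advanced")
n_sites = 20
total = len(TIERS) ** n_sites  # 3**20 = 3,486,784,401 configurations

random.seed(0)  # fixed seed for a reproducible sample
sample = [tuple(random.choice(TIERS) for _ in range(n_sites))
          for _ in range(100)]
```

Each sampled tuple names one configuration to generate and benchmark.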
&lt;p>Run the experiment on an existing set of benchmarks using a cluster such as CloudLab.
Manage the cluster machines across potentially dozens of reservations and combine
the results into one comprehensive view of Static Python performance.&lt;/p>
&lt;h3 id="derive-benchmarks-from-python-applications">Derive Benchmarks from Python Applications&lt;/h3>
&lt;ul>
&lt;li>Topics: &lt;code>types&lt;/code>, &lt;code>optimization&lt;/code>, &lt;code>benchmark design&lt;/code>&lt;/li>
&lt;li>Skills: Python&lt;/li>
&lt;li>Difficulty: Medium&lt;/li>
&lt;li>Size: Small to Large&lt;/li>
&lt;li>Mentor: &lt;a href="https://deploy-preview-1007--ucsc-ospo.netlify.app/author/ben-greenman/">Ben Greenman&lt;/a>&lt;/li>
&lt;/ul>
&lt;p>Build or find realistic Python applications, equip them with rich types,
and modify them to run a meaningful performance benchmark. Running a benchmark
should produce timing information, and the timing should not be significantly
influenced by random variables, I/O actions, or system events.&lt;/p></description></item></channel></rss>