Hazelfire / innovation_in_ceas_ala_GW

Innovations in GiveWell-esque CEAs ... Monte Carlo, user input, transparency, checking

description
"Improve GiveWell's tools/analysis" (to permit Monte Carlo simulation and transparency, and to allow user input of moral and epistemic parameters)

Cost-effectiveness analysis / "GiveWell+": explicit uncertainty, transparency, customizability

This workspace

Started by David Reinstein, Rethink Priorities. See who-is-involved.md for other contributors and discussants. The project as a whole is a joint effort with Sam Nolan and others.

{% hint style="info" %} **Update 27 May 2022:** Sam Nolan has made strong progress on the GiveDirectly model in a Squiggle notebook HERE, which he reports in an EA Forum post HERE. {% endhint %}

Introduction

This GitBook gives the motivation for working on tooling to improve CEAs. It also lays out the current state of research in this area, so if you are interested in contributing, you should have little difficulty finding suitable work to do.

Why tooling for CEAs?

Quantifying impact is a cornerstone of Effective Altruism, and GiveWell's analyses are currently considered a gold standard in the EA community. However, these analyses suffer from some limitations that we believe are important for the EA community to address.

  1. Perhaps the most critical issue with GiveWell's analysis is that it does not formally consider uncertainty. Representing uncertainty matters to EAs, particularly when it comes to determining the value of research, epistemics, and forecasting (for an EA org in this space, see QURI). More practically, representing uncertainty may help donors and policymakers weigh the 'risk versus return' of each intervention and consider how confident they should be in the evaluations; a minimal Monte Carlo sketch follows this list. (See also #why-make-uncertainty-explicit)
  2. A second issue is that the model may contain bugs, rely on wrongly coded data, or harbor internal inconsistencies. Making the computations more explicit and transparent could facilitate checking, and the use of tools to improve reliability and reduce errors; see the unit-checking sketch after this list. See Pedant for an EA project in this space focusing on 'type checking'.
  3. The third issue is that the cost-effectiveness analyses (CEAs) currently provided by organizations such as GiveWell are daunting and hard to follow: the underlying model can be difficult to tease out from a collection of cell references and formulas. Improving the way users understand and interact with these models could make them more accessible to EAs and to the research community.
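
To make point 1 concrete, here is a minimal sketch of a Monte Carlo CEA in Python/NumPy. The model structure and every parameter value below are hypothetical, chosen only to illustrate the technique; they are not GiveWell's numbers.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000  # number of Monte Carlo draws

# Uncertain inputs expressed as distributions rather than point estimates
# (all values are illustrative placeholders, not GiveWell figures).
cost_per_net = rng.lognormal(mean=np.log(5.0), sigma=0.2, size=n)            # USD per net
nets_per_death_averted = rng.lognormal(mean=np.log(900), sigma=0.4, size=n)  # nets needed

# The headline metric becomes a distribution, not a single spreadsheet cell.
cost_per_death_averted = cost_per_net * nets_per_death_averted

p5, p50, p95 = np.percentile(cost_per_death_averted, [5, 50, 95])
print(f"Cost per death averted: ${p50:,.0f} (90% interval: ${p5:,.0f} to ${p95:,.0f})")
```

Rather than a single 'cost per death averted' number, the output is a full distribution, from which donors can read off both a central estimate and the spread around it.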
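
Point 2 describes the kind of error a dimensional ('type') checker such as Pedant aims to catch automatically. Below is a hand-rolled Python sketch of the underlying idea; the `Quantity` class and unit strings are hypothetical illustrations, not Pedant's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    """A number tagged with a unit, so unit mismatches surface loudly."""
    value: float
    unit: str  # e.g. "usd/net", "people/net"

def add(a: Quantity, b: Quantity) -> Quantity:
    # Adding dollars to people is a modelling bug; fail instead of
    # silently producing a meaningless number, as a spreadsheet would.
    if a.unit != b.unit:
        raise TypeError(f"cannot add {a.unit} to {b.unit}")
    return Quantity(a.value + b.value, a.unit)

cost = Quantity(5.0, "usd/net")
coverage = Quantity(1.8, "people/net")
try:
    add(cost, coverage)
except TypeError as err:
    print(err)  # -> cannot add usd/net to people/net
```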

That being said, GiveWell is already a positive outlier in terms of CEA quality. Our project also aims to build tools and examples that support CEAs for non-GiveWell organizations, particularly longtermist ones, which have so far largely escaped rigorous cost-effectiveness analysis.

Specifically, this project investigates the efficacy of the following innovations:

  • Representing CEAs using code (See code-representations-of-gw-models.md and pedant.md)
  • Using alternatives to spreadsheets (MS Excel etc) for representing CEAs, particularly those that can handle uncertainty (See @Guesstimate and @Causal.app)
  • Presenting these in 'visual dashboard/BI' formats that make the uncertainty clear and enable intuitive comparisons (see the comparison sketch below)
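
As a sketch of the 'intuitive comparisons' bullet: once each intervention's cost-effectiveness is a distribution, risk-versus-return comparisons reduce to simple array operations. Both distributions here are hypothetical placeholders, not real evaluations.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000

# Hypothetical cost-per-death-averted distributions for two interventions:
# B has a cheaper median but much wider uncertainty than A.
intervention_a = rng.lognormal(mean=np.log(4500), sigma=0.45, size=n)
intervention_b = rng.lognormal(mean=np.log(3800), sigma=0.80, size=n)

for name, draws in [("A", intervention_a), ("B", intervention_b)]:
    p5, p50, p95 = np.percentile(draws, [5, 50, 95])
    print(f"{name}: median ${p50:,.0f}, 90% interval ${p5:,.0f} to ${p95:,.0f}")

# Probability that B is actually more cost-effective (lower cost) than A:
print(f"P(B beats A) = {(intervention_b < intervention_a).mean():.0%}")
```

A dashboard would show these as overlaid distributions; the point estimate alone ('B is cheaper') hides how often B underperforms A.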

We further discuss this case, and responses, under:

key-writings-and-resources.md

limitations-of-givewell

... and in other sections below.

_Semi-aside:_ I think this would be really good both for directly identifying the most valuable global health interventions and for building the sophistication and epistemics of the EA and policymaking communities. It is also a great example for illustrating Fermi/Monte Carlo estimation, and it makes concrete, and opens up, some very interesting and meaningful considerations of what to measure and value.
