
The SPM Poster Award

2019

Student Posters

Methods:
Erin Rossiter (Washington University in St. Louis)
“Measuring Agenda-Setting Power in Political Discourse”
Selection Committee: Justin Esarey (chair), Ines Levin, Chris Lucas
Citation: Rossiter's project extends topic models for text analysis to measure the extent to which speakers are able to control the agenda of a political debate or conversation by shifting the topic. Rossiter provides careful validation of the model, along with benchmarks against alternative methods, and demonstrates the model in applications to deliberative speech and election debates. In sum, the committee appreciated this poster's combination of methodological innovation, substantive applicability, clear presentation, and well-thought-out agenda for improvement and extension in future work.

Applications:
Kelsey Shoub (University of North Carolina at Chapel Hill)
“How Changing Frame Sets Alters Legislative Outcomes in Congress”
Selection Committee: Mark Pickup, Alex Tahk (chair), Michelle Torres
Citation: Shoub’s research project uses supervised and unsupervised machine learning applied to the Congressional Record to test whether changes in framing affect policy outcomes in Congress. For bills introduced repeatedly in Congress, Shoub demonstrates that changes in framing, as measured using dynamic topic models, increase the probability of passage in each chamber. The project represents an exemplary use of current text analysis methods to make valuable substantive contributions.

Faculty Poster

Erin Hartman (UCLA)
"Equivalence Based Falsification Tests for Regression Discontinuity Designs"
Selection Committee: Curt Signorino (chair), Dan Hopkins, Kim Twist
Citation: In her poster, Hartman notes that RDDs often suffer from low power, which can be mistaken for evidence in favor of the appropriateness of an RDD. She argues that the usual hypothesis test for the appropriateness of a regression discontinuity design should be flipped, with the null hypothesis being that the data are inconsistent with the design, forcing the researcher to provide affirmative evidence that an RDD is warranted. Hartman shows how to conduct the hypothesis tests under different assumptions, demonstrates them with a Monte Carlo analysis, and applies them to replications of RDD studies. Her argument is intuitive and clearly presented, the statistical analysis is rigorous, and her empirical results suggest that an RDD was not valid for some of the replicated studies.
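
To make the flipped-null logic concrete, here is a minimal sketch of an equivalence-based falsification test: a two one-sided tests (TOST) check that a pretreatment covariate is provably similar on either side of the cutoff. This illustrates the general idea rather than Hartman's actual procedure; the simulated data, the bandwidth, and the equivalence margin `delta` are all illustrative assumptions.

```python
# Sketch of a TOST equivalence test on a pretreatment covariate near an
# RDD cutoff. The null is that the data are INCONSISTENT with a valid
# design (|difference| >= delta); rejecting it supports the RDD.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated running variable and a covariate that is, in truth,
# continuous at the cutoff (cutoff = 0). Purely illustrative data.
running = rng.uniform(-1, 1, 2000)
covariate = 1.0 + 0.5 * running + rng.normal(0, 1, 2000)

# Restrict to observations within an (assumed) bandwidth of the cutoff.
bandwidth = 0.2
below = covariate[(running < 0) & (running > -bandwidth)]
above = covariate[(running >= 0) & (running < bandwidth)]

# Equivalence margin: differences smaller than delta count as
# "consistent with a valid design." Choosing delta is a substantive
# judgment call left to the researcher.
delta = 0.2

diff = above.mean() - below.mean()
se = np.sqrt(above.var(ddof=1) / above.size + below.var(ddof=1) / below.size)
df = above.size + below.size - 2  # simple approximation for the sketch

# TOST: reject the null of non-equivalence only if BOTH one-sided
# tests reject, i.e., the difference is demonstrably small.
t_lower = (diff + delta) / se          # H0: diff <= -delta
t_upper = (diff - delta) / se          # H0: diff >= +delta
p_lower = 1 - stats.t.cdf(t_lower, df)
p_upper = stats.t.cdf(t_upper, df)
p_tost = max(p_lower, p_upper)

print(f"difference at cutoff: {diff:.3f}, TOST p-value: {p_tost:.3f}")
# Small p_tost -> reject "inconsistent with the design."
```

The contrast with the conventional test is the point: under the usual null of no difference, a failure to reject may reflect nothing more than low power, whereas under the flipped null a valid design must earn its support with a small p-value.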
