Computational Stratification: Data-Intensive Social Science for Inequality & Policy's repositories
web_scraping
Code and data for the research team scraping charter websites using scrapy, requests, Selenium, and wget with Python, shell, and Docker. This is the foundation of analyses of charter schools' linguistic strategies and social implications.
scrape_obituaries
Code for scraping obituaries from Legacy.com, in three steps: scrape obituary URLs, scrape paragraph text, then extract age, sex, and race.
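A minimal sketch of the final extraction step, assuming obituary paragraphs have already been scraped. The regex and the `extract_age` helper are illustrative assumptions, not the repo's actual extraction logic.

```python
# Hypothetical sketch: pull an age from scraped obituary text.
# The pattern and function name are assumptions for illustration only.
import re

def extract_age(paragraph):
    """Return an age found in phrases like 'at age 82' or 'aged 82', else None."""
    match = re.search(r"\b(?:age|aged)\s+(\d{1,3})\b", paragraph, re.IGNORECASE)
    return int(match.group(1)) if match else None

print(extract_age("Jane Doe passed away peacefully at age 82."))  # 82
```

In practice the pipeline would run a battery of such extractors (age, sex, race) over every scraped paragraph and write the results to a structured table.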
scraping_server
Code for a universal web-crawling UI.
text_analysis
Code and data for a research team doing text analysis: word counts, word embeddings, topic models, HTML parsing, unsupervised clustering, etc.
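Of the techniques listed, word counting is the simplest to sketch. A minimal version, assuming plain whitespace tokenization (the repo's pipelines likely use richer preprocessing):

```python
# Toy word-count sketch using the standard library; tokenization here is
# deliberately naive (lowercase + whitespace split).
from collections import Counter

def word_counts(text):
    return Counter(text.lower().split())

counts = word_counts("school choice and school identity")
print(counts["school"])  # 2
```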
web-crawling-ic2s2-2022
An introduction to web-crawling/scraping for beginners with some Python know-how. Created for IC2S2 Summer 2022 by Jaren Haber, PhD
data_management
Code for managing large datasets in Python, usually with pandas. These scripts mostly merge, filter, inspect, and count things. Developed for a charter school database of 10K+ units built from web-crawling and federal data sources (CCD, ACS, etc.).
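A minimal sketch of the merge-and-filter pattern described above. The column names (`school_id`, `enrollment`, `n_pages`) and thresholds are made up for illustration and are not the repo's actual schema.

```python
# Hypothetical example: join scraped web data onto a school roster,
# then filter to larger schools. Column names are illustrative.
import pandas as pd

schools = pd.DataFrame({"school_id": [1, 2, 3],
                        "enrollment": [250, 900, 400]})
scraped = pd.DataFrame({"school_id": [1, 2, 4],
                        "n_pages": [12, 30, 7]})

# Inner merge keeps only schools present in both sources.
merged = schools.merge(scraped, on="school_id", how="inner")
# Filter to schools with enrollment above 300.
large = merged[merged["enrollment"] > 300]
print(len(merged), len(large))  # 2 1
```

The same merge/filter/count idiom scales to the 10K+ unit database when keyed on a stable school identifier such as the federal NCES ID.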
geospatial
Code that examines geographic patterns in charter school proliferation, size, performance, and especially ideology within race- and class-structured school districts and Census tracts. Key packages include matplotlib, folium, and geoplotlib.
sorting-schools-2020
Replication code for the research article "Sorting Schools: A Computational Analysis of Charter School Identities and Stratification" by Jaren Haber, UC Berkeley. The paper investigates the relationships between charter school and school district poverty & race, on the one hand, and school ideology and academic performance, on the other.
edunomics_arrays
Arrays of school-level spending across student poverty/disadvantage, built for the Edunomics Lab. Covers DC and possibly other states/districts.
scrapy-cluster
This Scrapy project uses Redis and Kafka to create a distributed, on-demand scraping cluster.