The Impact of Home-Sharing Self-Regulations on Crime Rates

The rise of the sharing economy has transformed traditional industries and brought about significant societal changes, prompting ongoing policy debates about regulation. Our recent research investigates the effects of platform self-regulations within the home-sharing market, particularly focusing on Airbnb.

Key Findings: Crime Rate Reduction through Self-Regulation

We analyzed the effects of policy changes that reduce the number of Airbnb listings using a difference-in-differences approach. Our findings indicate that such self-regulations lead to a reduction in overall crime rates: incidents of assault, robbery, and burglary decreased, although theft increased.
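
To make the identification strategy concrete, here is a minimal difference-in-differences sketch in Python. The file name, column names, and bare two-way specification are illustrative assumptions for this post, not the paper's actual model, which is richer than this interaction term alone.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical panel: one row per neighborhood-month, with the crime rate,
    # a treated indicator (1 if the listing-reducing policy applies to the
    # neighborhood), and a post indicator (1 for months after the change).
    df = pd.read_csv("crime_panel.csv")

    # The coefficient on treated:post is the difference-in-differences
    # estimate of the policy's effect on crime rates.
    did = smf.ols("crime_rate ~ treated * post", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["neighborhood"]}
    )
    print(did.summary())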

Neighborhood Variations: Socioeconomic Moderators

To understand how these effects vary across different neighborhoods, we employed geographically weighted regression. Our analysis revealed that socioeconomic factors like income, housing prices, and population density significantly moderate the impact of Airbnb occupancy on crime rates. This highlights the importance of local context in evaluating the outcomes of platform self-regulations.
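
For readers unfamiliar with the method, the sketch below shows the core of geographically weighted regression: a separate weighted least-squares fit at each location, with nearby observations weighted more heavily. The Gaussian kernel, fixed bandwidth, and column choices are simplifying assumptions; full implementations (for example, PySAL's mgwr package) also handle bandwidth selection.

    import numpy as np

    def gwr_coefficients(coords, X, y, bandwidth):
        """Bare-bones GWR: one local WLS fit per location."""
        n = len(y)
        X1 = np.column_stack([np.ones(n), X])      # intercept + regressors,
                                                   # e.g. Airbnb occupancy,
                                                   # income, housing prices,
                                                   # population density
        betas = np.empty((n, X1.shape[1]))
        for i in range(n):
            d = np.linalg.norm(coords - coords[i], axis=1)
            w = np.exp(-(d / bandwidth) ** 2)      # Gaussian kernel weights
            Xw = X1 * w[:, None]                   # equivalent to W @ X1
            betas[i] = np.linalg.solve(Xw.T @ X1, Xw.T @ y)
        return betas  # row i holds the local coefficients at location i

The spatial variation in the fitted coefficient on Airbnb occupancy is what reveals how socioeconomic context moderates the effect.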

Policy Implications and the Sharing Economy

Our research provides empirical evidence on the societal impacts of the sharing economy and the role of platform self-regulation. By showing how reducing home-sharing listings can influence crime rates, and how these effects differ across neighborhoods, our findings offer valuable insights for policymakers. These insights can help shape regulations that promote both the benefits of the sharing economy and community safety.

The full paper can be found here.

A Fresh Look at Combination Therapy: Correlated Drug Action Models

Combination therapy, the practice of treating patients with multiple drugs either simultaneously or sequentially, has long been a cornerstone in the battle against complex diseases like cancer and HIV/AIDS. The rationale is straightforward: cancer cells, for instance, might develop resistance to one drug, but the likelihood of evading multiple drugs with different mechanisms is considerably lower. This strategy, pioneered by Frei and Freireich, has become an integral part of modern oncology. However, the challenge lies in efficiently quantifying the effects of these combinations, given the vast number of possible combinations and the resources required. Our recent research introduces a novel framework, Correlated Drug Action (CDA), to address this challenge.

The Essence of Combination Therapy Models

Combination therapies can be studied at two levels: in vitro on cells and in vivo on living organisms. In vitro research focuses on dose response at a fixed time post-drug administration (dose-space models), while in vivo research focuses on survival time at fixed doses (temporal models).

Traditionally, null models have established a baseline for expected drug combination effects. These models are essential for determining if a combination is more effective than expected.

Introducing Correlated Drug Action (CDA)

Building on the principle of Independent Drug Action (IDA), which posits that each drug in a combination acts as if the other drug were absent, we introduce the temporal Correlated Drug Action (tCDA) model. Unlike previous models that rely on time-varying correlation coefficients, tCDA employs a single time-independent coefficient, offering a fast and scalable solution.

The tCDA model describes the effect of a combination based on individual monotherapies and a population-specific correlation coefficient. This model is valid for generic joint distributions of survival times characterized by their Spearman correlation.
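
A small Monte Carlo sketch makes the idea concrete: draw correlated monotherapy survival times through a copula and take each patient's maximum as the combination survival time. The Gaussian copula and exponential monotherapies below are our illustrative choices, not requirements of the model (for a Gaussian copula, the parameter rho closely tracks the implied Spearman correlation, (6/pi)*arcsin(rho/2)).

    import numpy as np
    from scipy import stats

    def tcda_survival(t_grid, quantile_a, quantile_b, rho, n=100_000, seed=0):
        """tCDA sketch: correlated monotherapy survival times via a Gaussian
        copula; each patient's combination survival time is the maximum."""
        rng = np.random.default_rng(seed)
        z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
        u, v = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])
        t_comb = np.maximum(quantile_a(u), quantile_b(v))
        return np.array([(t_comb > t).mean() for t in t_grid])

    # Illustrative exponential monotherapies, mean survival 8 and 5 months:
    q_a = lambda u: -8.0 * np.log(1 - u)
    q_b = lambda u: -5.0 * np.log(1 - u)
    s_combo = tcda_survival(np.linspace(0, 36, 37), q_a, q_b, rho=0.3)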

Validating tCDA with Clinical Data

We applied the tCDA model to public oncology clinical trial data involving 18 different combinations. The model effectively explained the effect of clinical combination therapies and identified combinations that could not be explained by tCDA alone. When the survival distribution of a combination is explained by tCDA, the estimated correlation parameter can reveal sub-populations that may benefit more from one monotherapy or the combination.
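
In the same spirit, the correlation parameter can be estimated by least squares against an observed combination survival curve (in practice, digitized Kaplan-Meier curves). This sketch reuses tcda_survival from above; the paper's actual estimation procedure may differ.

    from scipy.optimize import minimize_scalar

    def fit_rho(t_grid, s_observed, quantile_a, quantile_b):
        """Least-squares estimate of the tCDA correlation parameter,
        using tcda_survival from the previous sketch."""
        loss = lambda rho: np.mean(
            (tcda_survival(t_grid, quantile_a, quantile_b, rho)
             - s_observed) ** 2
        )
        return minimize_scalar(loss, bounds=(-0.99, 0.99),
                               method="bounded").x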

Extending CDA to Cell Cultures: Dose-Space CDA (dCDA)

To address the limitations of translating preclinical cell line results to clinical outcomes, we adapted IDA’s temporal-space ideas to dose-space, resulting in the dose-space CDA (dCDA) model. This model describes the effect of combinations in cell cultures in terms of the dosages required for each monotherapy to kill cells after treatment.

The dCDA model estimates the correlation between the dosages in their joint distribution, analogous to the correlations the tCDA and ORR models estimate in patient cohorts. Using experiments on the MCF7 breast cancer cell line, we demonstrated dCDA's effectiveness in assessing potential drug synergy. We also introduced the Excess Over CDA (EOCDA) metric, which evaluates possible synergy while allowing for non-zero correlations.
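
The sketch below is our illustrative reading of the dose-space idea, not the paper's precise definitions: each cell carries a correlated pair of lethal-dose thresholds implied by Hill-type viability curves, and EOCDA is computed as the observed kill fraction minus the dCDA prediction. The copula choice, Hill parameters, and helper names are assumptions.

    import numpy as np
    from scipy import stats

    def dcda_viability(dose_a, dose_b, thr_quantile_a, thr_quantile_b, rho,
                       n=100_000, seed=0):
        """dCDA sketch: a cell survives the combination only if both
        applied doses stay below its correlated lethal-dose thresholds."""
        rng = np.random.default_rng(seed)
        z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
        u, v = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])
        alive = (dose_a < thr_quantile_a(u)) & (dose_b < thr_quantile_b(v))
        return alive.mean()

    # Thresholds implied by Hill viability curves V(d) = 1 / (1 + (d/IC50)^h):
    # the fraction of cells with threshold above d equals V(d).
    thr_a = lambda u: 1.0 * (u / (1 - u)) ** (1 / 2.0)   # IC50 = 1.0, h = 2
    thr_b = lambda u: 0.5 * (u / (1 - u)) ** (1 / 1.5)   # IC50 = 0.5, h = 1.5

    def eocda(observed_kill, dose_a, dose_b, rho):
        """Excess over CDA: observed kill minus the dCDA prediction."""
        predicted_kill = 1.0 - dcda_viability(dose_a, dose_b,
                                              thr_a, thr_b, rho)
        return observed_kill - predicted_kill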

To read the full paper, click here.

A Quantum Leap in Binary Classification: Harnessing Fermi–Dirac Distributions

Binary classification stands as a cornerstone of machine learning, playing a critical role in a multitude of applications from medical diagnoses to spam filtering. However, a perennial challenge within this domain is obtaining a reliable probabilistic output indicating the likelihood of a classification being correct.

Our paper, published in PNAS, proposes an innovative approach: mapping the probability of correct classification to the probability of fermion occupation in a quantum system, specifically via the Fermi–Dirac distribution. This perspective yields calibrated probabilistic outputs and introduces new methodologies for optimizing classification thresholds and evaluating classifier performance.

The Quantum Connection: Fermi–Dirac Distribution

At its core, the Fermi–Dirac distribution describes the statistical distribution of particles over energy states in systems obeying Fermi–Dirac statistics, typically fermions, which adhere to the Pauli exclusion principle. In this paper, we adapt the mathematical form of this distribution to model the probability of correct classification in binary classifiers.
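
For reference, the distribution we borrow has the familiar form

    f(E) = 1 / (exp((E − μ) / (k_B T)) + 1),

where μ is the chemical potential, T the temperature, and k_B Boltzmann's constant. In the classification analogy, a monotone transform of the classifier's output (its normalized rank, in the sketch below) plays the role of the energy E.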

By leveraging this quantum analogy, we establish a framework where:

  • Optimal Decision Threshold: The threshold for class separation in binary classification is analogous to the chemical potential in a fermion system.
  • Calibrated Probabilistic Output: The Fermi–Dirac distribution allows for a calibrated probability that reflects the likelihood of correct classification.
  • AUC and Temperature: The area under the receiver operating characteristic curve (AUC) is related to the temperature of the analogous quantum system, providing insights into classifier performance variability.
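
One way to make this concrete is the following minimal calibration sketch, under our assumption that the fit is done in normalized-rank space; the paper's estimation details may differ, and fermi_dirac and calibrate are hypothetical helper names. It sorts items by score, bins the normalized ranks, and fits μ and T to the empirical positive rates.

    import numpy as np
    from scipy.optimize import curve_fit

    def fermi_dirac(r, mu, T):
        """Probability of the positive class at normalized rank r."""
        return 1.0 / (np.exp((r - mu) / T) + 1.0)

    def calibrate(scores, labels, bins=20):
        """Fit mu (threshold analog) and T (temperature analog) on a
        labeled calibration set; needs enough samples to fill the bins."""
        order = np.argsort(-np.asarray(scores))      # best-scored items first
        y = np.asarray(labels)[order]
        n = len(y)
        r = (np.arange(n) + 0.5) / n                 # normalized ranks in (0, 1)
        edges = np.linspace(0, 1, bins + 1)
        idx = np.digitize(r, edges) - 1
        r_mid = (edges[:-1] + edges[1:]) / 2
        p_emp = np.array([y[idx == b].mean() for b in range(bins)])
        (mu, T), _ = curve_fit(fermi_dirac, r_mid, p_emp, p0=[0.5, 0.1])
        return mu, T

With μ and T in hand, f(r) gives a calibrated probability for the item at normalized rank r, and the fitted T can be compared across classifiers in the spirit of the AUC-temperature correspondence above.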

The paper can be found here.