The Impact of Home-Sharing Self-Regulations on Crime Rates

The rise of the sharing economy has transformed traditional industries and brought about significant societal changes, prompting ongoing policy debates about regulation. Our recent research investigates the effects of platform self-regulations within the home-sharing market, particularly focusing on Airbnb.

Key Findings: Crime Rate Reduction through Self-Regulation

We analyzed the effects of policy changes that reduce the number of Airbnb listings using a difference-in-differences approach. Our findings indicate that such self-regulations lead to a reduction in overall crime rates: incidents of assault, robbery, and burglary decreased, although theft increased.
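The core of a difference-in-differences comparison can be sketched with hypothetical numbers; the rates and tract groupings below are illustrative placeholders, not figures from the paper:

```python
import numpy as np

# Hypothetical mean crime rates per 1,000 residents (illustrative only).
# Rows: [pre-policy, post-policy]; columns: [treated tracts, control tracts].
rates = np.array([[12.0, 10.0],   # pre-policy
                  [ 9.0,  9.5]])  # post-policy

def diff_in_diff(rates):
    """(treated post - treated pre) - (control post - control pre)."""
    treated_change = rates[1, 0] - rates[0, 0]   # -3.0
    control_change = rates[1, 1] - rates[0, 1]   # -0.5
    return treated_change - control_change

effect = diff_in_diff(rates)   # -2.5: crime fell 2.5 more in treated tracts
```

The control-group change nets out the common time trend, so the remaining difference is attributed to the policy.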

Neighborhood Variations: Socioeconomic Moderators

To understand how these effects vary across different neighborhoods, we employed geographically weighted regression. Our analysis revealed that socioeconomic factors like income, housing prices, and population density significantly moderate the impact of Airbnb occupancy on crime rates. This highlights the importance of local context in evaluating the outcomes of platform self-regulations.
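Geographically weighted regression fits a separate weighted least-squares model at each location, down-weighting distant observations. A minimal numpy sketch, assuming a Gaussian distance kernel and a hypothetical bandwidth (not the specification used in the study):

```python
import numpy as np

def gwr_coefficients(coords, X, y, bandwidth):
    """One local weighted least-squares fit per location; observations are
    weighted by a Gaussian kernel of their distance to the focal point."""
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])        # add intercept column
    betas = np.empty((n, Xd.shape[1]))
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)
        w = np.exp(-(d / bandwidth) ** 2)        # Gaussian kernel weights
        W = np.diag(w)
        betas[i] = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)
    return betas

# Tiny synthetic example: a globally linear outcome, so every local fit
# should recover the same coefficients.
coords = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
X = np.array([0., 1., 2., 3.])
y = 2.0 + 3.0 * X
local_betas = gwr_coefficients(coords, X, y, bandwidth=1.0)
```

When the relationship truly varies across space, the rows of `local_betas` differ, which is exactly the heterogeneity the moderation analysis examines.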

Policy Implications and the Sharing Economy

Our research provides empirical evidence on the societal impacts of the sharing economy and the role of platform self-regulation. By showing how reducing home-sharing listings can influence crime rates, and how these effects differ across neighborhoods, our findings offer valuable insights for policymakers. These insights can help shape regulations that promote both the benefits of the sharing economy and community safety.

The full paper can be found here.

A Fresh Look at Combination Therapy: Correlated Drug Action Models

Combination therapy, the practice of treating patients with multiple drugs either simultaneously or sequentially, has long been a cornerstone in the battle against complex diseases like cancer and HIV/AIDS. The rationale is straightforward: cancer cells, for instance, might develop resistance to one drug, but the likelihood of evading multiple drugs with different mechanisms is considerably lower. This strategy, pioneered by Frei and Freireich, has become an integral part of modern oncology. However, the challenge lies in efficiently quantifying the effects of these combinations, given the vast number of possible combinations and the resources required. Our recent research introduces a novel framework, Correlated Drug Action (CDA), to address this challenge.

The Essence of Combination Therapy Models

Combination therapies can be studied at two levels: in vitro on cells and in vivo on living organisms. In vitro research focuses on dose response at a fixed time post-drug administration (dose-space models), while in vivo research focuses on survival time at fixed doses (temporal models).

Traditionally, null models have established a baseline for expected drug combination effects. These models are essential for determining if a combination is more effective than expected.

Introducing Correlated Drug Action (CDA)

Building on the principle of Independent Drug Action (IDA), which posits that each drug in a combination acts as if the other drug were absent, we introduce the temporal Correlated Drug Action (tCDA) model. Unlike previous models with time-varying correlation coefficients, tCDA employs a non-time-varying coefficient, offering a fast and scalable solution.

The tCDA model describes the effect of a combination based on individual monotherapies and a population-specific correlation coefficient. This model is valid for generic joint distributions of survival times characterized by their Spearman correlation.
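The intuition can be illustrated with a small simulation (a sketch only, not the authors' estimation procedure): suppose each patient's combination survival time is the better of their two monotherapy times, with the two times coupled through a Gaussian copula whose correlation plays the role of the CDA parameter. The exponential survival scales and sample size below are hypothetical.

```python
from math import erf, sqrt
import numpy as np

rng = np.random.default_rng(0)

def combo_median_survival(n, rho, scale_a=6.0, scale_b=9.0):
    """Median combination survival when each patient's time on the
    combination is the better of two monotherapy times whose joint
    distribution is a Gaussian copula with correlation rho."""
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    u = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))  # standard normal CDF
    u = np.clip(u, 1e-12, 1.0 - 1e-12)
    t_a = -scale_a * np.log(1.0 - u[:, 0])   # exponential monotherapy times
    t_b = -scale_b * np.log(1.0 - u[:, 1])
    return float(np.median(np.maximum(t_a, t_b)))

median_independent = combo_median_survival(20000, rho=0.0)
median_correlated = combo_median_survival(20000, rho=0.9)  # smaller benefit
```

High positive correlation means the same patients respond well to both drugs, so the combination adds little over the better monotherapy; low or negative correlation means different sub-populations respond to each drug, and the combination helps more.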

Validating tCDA with Clinical Data

We applied the tCDA model to public oncology clinical trial data involving 18 different combinations. The model effectively explained the effect of clinical combination therapies and identified combinations that could not be explained by tCDA alone. When the survival distribution of a combination is explained by tCDA, the estimated correlation parameter can reveal sub-populations that may benefit more from one monotherapy or the combination.

Extending CDA to Cell Cultures: Dose-Space CDA (dCDA)

To address the limitations of translating preclinical cell line results to clinical outcomes, we adapted IDA’s temporal-space ideas to dose-space, resulting in the dose-space CDA (dCDA) model. This model describes the effect of combinations in cell cultures in terms of the dosages required for each monotherapy to kill cells after treatment.

The dCDA model estimates the correlation of the joint distribution of dosages, playing a role analogous to that of the correlation parameter in the tCDA and ORR models for patient cohorts. Using experiments on the MCF7 breast cancer cell line, we demonstrated dCDA's effectiveness in assessing potential drug synergy, and we introduced the Excess Over CDA (EOCDA) metric, which evaluates possible synergy while allowing for non-zero correlations.

To read the full paper, click here.

A Quantum Leap in Binary Classification: Harnessing Fermi–Dirac Distributions

Binary classification stands as a cornerstone of machine learning, playing a critical role in a multitude of applications from medical diagnoses to spam filtering. However, a perennial challenge within this domain is obtaining a reliable probabilistic output indicating the likelihood of a classification being correct.

Our paper, published in PNAS, proposes an innovative approach: mapping the probability of correct classification to the probability of fermion occupation in a quantum system, specifically via the Fermi–Dirac distribution. This perspective yields calibrated probabilistic outputs and introduces new methodologies for optimizing classification thresholds and evaluating classifier performance.

The Quantum Connection: Fermi–Dirac Distribution

At its core, the Fermi–Dirac distribution describes the statistical distribution of particles over energy states in systems obeying Fermi–Dirac statistics, typically applied to fermions, which adhere to the Pauli exclusion principle. In this paper, we adapt the mathematical form of this distribution to model the probability of correct classification in binary classifiers.

By leveraging this quantum analogy, we establish a framework where:

  • Optimal Decision Threshold: The threshold for class separation in binary classification is analogous to the chemical potential in a fermion system.
  • Calibrated Probabilistic Output: The Fermi–Dirac distribution allows for a calibrated probability that reflects the likelihood of correct classification.
  • AUC and Temperature: The area under the receiver operating characteristic curve (AUC) is related to the temperature of the analogous quantum system, providing insights into classifier performance variability.
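The mapping can be sketched directly from the Fermi–Dirac functional form; the scores, threshold, and temperature below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def fermi_dirac(energy, mu, T):
    """Fermi-Dirac occupation probability: 1 / (exp((E - mu) / T) + 1).
    In the classification analogy, `energy` is a sample's score, `mu` plays
    the role of the decision threshold, and T sets how sharply the
    calibrated probability falls off around it."""
    return 1.0 / (np.exp((energy - mu) / T) + 1.0)

scores = np.linspace(-3.0, 3.0, 7)           # illustrative classifier scores
probs = fermi_dirac(scores, mu=0.0, T=0.5)   # calibrated probabilities
```

At the threshold (`energy == mu`) the probability is exactly 0.5, and as `T` shrinks the curve approaches a hard step, mirroring a classifier whose decisions become nearly deterministic.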

The paper can be found here.

Is A + B = AB when combining multiple drugs?

Drug combinations have shown great promise against many diseases, including cancer and HIV. Despite this empirical success, drug combination therapy is far from being theoretically understood, in part because efforts to discover synergistic drug combinations by high-throughput screening are often not followed by an in-depth investigation of how the synergy works at a molecular level. In our recent paper, we study in depth how the molecular responses of cells to each of two drugs combine when the drugs are given together. Check it out here.

SUMMA: A novel Unsupervised Ensemble Learning Algorithm

Our paper on a new ensemble learning algorithm was recently published in the Journal of Machine Learning Research (JMLR). In it, we propose an unsupervised ensemble learning algorithm, which we call SUMMA. The aim of ensemble learning is to combine multiple predictions into a more robust predictor, much like asking a question of many experts and pooling their collective wisdom. With new algorithms appearing almost daily, and given the heterogeneity of data, it can be wiser to base predictions not on a single algorithm but on an ensemble of algorithms. SUMMA achieves this in an unsupervised way. More details can be found here.
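One way to sketch the spectral idea behind unsupervised ensembling (a simplified stand-in for SUMMA's actual inference, run here on simulated base classifiers): the off-diagonal covariance of the classifiers' sample ranks is approximately rank one, so its leading eigenvector can serve as performance-based weights without any labels.

```python
import numpy as np

rng = np.random.default_rng(1)

def summa_weights(ranks):
    """Unsupervised weighting sketch: zero out the noisy diagonal of the
    rank covariance, take the leading eigenvector of what remains, and
    normalize it into ensemble weights. `ranks` is (n_classifiers, n_samples)."""
    C = np.cov(ranks)
    np.fill_diagonal(C, 0.0)                 # diagonal is dominated by noise
    vals, vecs = np.linalg.eigh(C)
    v = vecs[:, np.argmax(vals)]             # leading eigenvector
    if v.sum() < 0:
        v = -v                               # fix the arbitrary sign
    return v / v.sum()

# Simulated binary labels and three base classifiers of decreasing skill.
y = rng.integers(0, 2, 2000)
scores = np.array([s * y + rng.normal(size=y.size) for s in (2.0, 1.0, 0.2)])
ranks = scores.argsort(axis=1).argsort(axis=1)   # per-classifier sample ranks
w = summa_weights(ranks.astype(float))
ensemble_score = w @ ranks                       # weighted rank aggregation
```

The better a simulated classifier tracks the hidden labels, the more its ranks co-vary with the others, and the larger the weight it receives.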


Paper Published in Nature Scientific Reports

The past decade has brought a lot of excitement to molecular biology. With advances in measurement technology, we can now measure the expression of thousands of genes simultaneously. This has enabled the development of biomarker sets predictive of diseases such as asthma. Although they have important prognostic value, biomarker sets are usually not very interpretable. In this work, we developed a novel algorithm, NeTFactor, that uses a computationally inferred, context-specific gene regulatory network and applies topological, statistical, and optimization methods to identify a minimal set of regulators underlying a biomarker set. The paper can be found here.
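The minimal-regulator idea can be sketched as a greedy set cover over a regulator-to-targets map (NeTFactor's actual optimization over an inferred network is more involved; the regulator and gene names below are hypothetical):

```python
def minimal_regulators(regulon, biomarkers):
    """Greedy set-cover sketch: repeatedly pick the regulator whose targets
    cover the most still-uncovered biomarker genes."""
    uncovered = set(biomarkers)
    chosen = []
    while uncovered:
        best = max(regulon, key=lambda r: len(regulon[r] & uncovered))
        if not regulon[best] & uncovered:
            break                      # remaining biomarkers are unreachable
        chosen.append(best)
        uncovered -= regulon[best]
    return chosen

# Hypothetical regulator -> target-gene sets.
regulon = {
    "TF1": {"g1", "g2", "g3"},
    "TF2": {"g3", "g4"},
    "TF3": {"g5"},
}
chosen = minimal_regulators(regulon, {"g1", "g2", "g3", "g4", "g5"})
```

Greedy set cover is a classic approximation to the (NP-hard) minimal cover; it conveys why a handful of master regulators can explain a much larger biomarker set.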




DREAM Malaria Challenge

We have recently launched a new crowdsourcing challenge: the DREAM Malaria Challenge. Its goal is to predict artemisinin drug-resistance levels for a test set of malaria parasites from their in vitro transcription data, using a training set of published in vivo and unpublished in vitro transcriptomes. More information about the challenge can be found in our recent correspondence published in Nature Biotechnology.