MORE | Fall 2025

Targeted Removal of Unwanted Biases in Probabilistic Circuits


Training datasets in machine learning (ML) frequently encode unwanted biases, which can skew a model's decisions and enable the recovery of sensitive data from its outputs. These biases exert even greater influence when some inputs are missing. This threatens the integrity of models in fields such as healthcare that rely on sensitive personal data. The researchers address the problem in probabilistic circuits, models that support exact inference even when inputs are partially observed, by using an existing scoring metric that remains computable under partial inputs to detect and penalize these biases during training. In deployed ML models, this framework can be used to protect sensitive data or to enforce domain-specific constraints. A toy sketch of the general idea follows.
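The sketch below is illustrative only; none of its names, values, or choices come from the research. It uses a Bernoulli mixture (a sum node over product nodes with Bernoulli leaves, the simplest probabilistic circuit) as a stand-in for the actual model, and a demographic-parity-style discrimination score as a stand-in for the paper's scoring metric. The missing-value convention (-1), column layout, and penalty weight LAM are all hypothetical.

import torch
import torch.nn.functional as F

K, D = 4, 6            # mixture components; binary variables (col 0 = S, col 1 = Y)
S_COL, Y_COL = 0, 1    # sensitive attribute and label columns (hypothetical layout)
LAM = 5.0              # strength of the bias penalty (hypothetical value)

torch.manual_seed(0)
mix_logits = torch.zeros(K, requires_grad=True)       # sum-node weights
leaf_logits = torch.randn(K, D, requires_grad=True)   # Bernoulli leaf parameters

def log_marginal(x):
    # log p(x) where entries equal to -1 are missing; a missing leaf
    # contributes log 1 = 0, the tractable marginalization circuits allow.
    x = x.unsqueeze(1)                                             # (N, 1, D)
    leaf_ll = x * F.logsigmoid(leaf_logits) + (1 - x) * F.logsigmoid(-leaf_logits)
    leaf_ll = torch.where(x >= 0, leaf_ll, torch.zeros_like(leaf_ll))
    comp_ll = leaf_ll.sum(-1) + torch.log_softmax(mix_logits, 0)   # (N, K)
    return torch.logsumexp(comp_ll, dim=-1)                        # (N,)

def p_y_given_s(s):
    # P(Y = 1 | S = s) with every other variable marginalized out,
    # i.e., the score evaluated under partial inputs.
    q = torch.full((1, D), -1.0)
    q[0, S_COL] = s
    q_y = q.clone()
    q_y[0, Y_COL] = 1.0
    return torch.exp(log_marginal(q_y) - log_marginal(q))

def bias_score():
    # Demographic-parity-style discrimination score (an assumption here,
    # standing in for whichever metric the researchers actually use).
    return torch.abs(p_y_given_s(1.0) - p_y_given_s(0.0))

# Toy data in which the label Y is spuriously correlated with S.
data = (torch.rand(512, D) < 0.5).float()
data[:, Y_COL] = (torch.rand(512) < 0.2 + 0.6 * data[:, S_COL]).float()

opt = torch.optim.Adam([mix_logits, leaf_logits], lr=0.05)
for step in range(300):
    opt.zero_grad()
    nll = -log_marginal(data).mean()            # ordinary likelihood term
    loss = nll + LAM * bias_score().mean()      # penalize the bias as we train
    loss.backward()
    opt.step()

print(f"discrimination score after training: {bias_score().item():.4f}")

Because every query in bias_score is a marginal, the penalty stays exact and differentiable even though most variables are unobserved, which is the property a partial-input scoring metric exploits.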

Student researcher

Marko Jojic

Major: Computer science

Hometown: Bellevue, Washington, United States

Graduation date: Spring 2026