Structural Fairness and Privacy in Machine Learning

Objective

This problem set examined formal fairness frameworks and privacy-preserving mechanisms to address algorithmic bias in decision-making systems.

Introduction

This is a summary of my explorations in algorithmic fairness, bias detection, and ethical considerations within machine learning systems, completed as part of the coursework for AI, Decision-Making, and Society at MIT CSAIL. These problems focus on applying theoretical fairness frameworks, developing practical evaluations, and implementing mitigation strategies to address the societal and ethical challenges posed by AI.


Methodology

  • Differential Privacy: Implemented a privacy-preserving mechanism on census data to mitigate risks of reconstruction attacks (see the sketch after this list).
  • Fairness Trade-offs: Evaluated the impact of fairness constraints on model performance across sensitive demographic groups.
  • Structural Fairness: Analyzed the role of data representation in perpetuating systemic disparities.
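
As a concrete illustration of the differential-privacy step above, the minimal sketch below adds Laplace noise to census-style count queries. The function name, epsilon value, and group counts are illustrative assumptions, not the exact coursework implementation.

import numpy as np

def private_count(true_count, epsilon, sensitivity=1.0, rng=None):
    """Release a count under epsilon-differential privacy via the Laplace mechanism.

    A counting query changes by at most 1 when one record is added or removed,
    so its sensitivity is 1 and the Laplace noise scale is sensitivity / epsilon.
    """
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Illustrative census-style counts per demographic group (assumed, not real data).
group_counts = {"group_a": 1240, "group_b": 315, "group_c": 87}
epsilon = 1.0  # privacy budget; smaller epsilon means more noise and stronger privacy

for group, count in group_counts.items():
    print(f"{group}: true={count}, private={private_count(count, epsilon):.1f}")

Publishing only noisy counts of this form bounds how much any single record can shift the released statistics, which is the property that frustrates reconstruction attacks.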

Findings

  • Privacy Trade-offs: Adding differential privacy reduced reconstruction risks but introduced marginal decreases in data utility.
  • Fairness Constraints: Enforcing fairness constraints improved outcomes for underrepresented groups but required careful balance to avoid significant accuracy trade-offs (see the sketch after this list).
  • Structural Bias: Highlighted the systemic nature of biases arising from skewed data distributions.
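
To make the accuracy-versus-fairness balance measurable, the sketch below computes per-group accuracy and a demographic parity gap for a classifier's predictions. The toy labels, predictions, and group memberships are assumptions for illustration, not the coursework's census data.

import numpy as np

def group_metrics(y_true, y_pred, group):
    """Per-group accuracy and positive-prediction rate for a binary classifier."""
    out = {}
    for g in np.unique(group):
        mask = group == g
        out[g] = {
            "accuracy": float(np.mean(y_true[mask] == y_pred[mask])),
            "positive_rate": float(np.mean(y_pred[mask])),
        }
    return out

def demographic_parity_gap(metrics):
    """Largest difference in positive-prediction rates across groups."""
    rates = [m["positive_rate"] for m in metrics.values()]
    return max(rates) - min(rates)

# Toy labels, predictions, and group membership (assumed for illustration).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

metrics = group_metrics(y_true, y_pred, group)
print(metrics)
print("demographic parity gap:", round(demographic_parity_gap(metrics), 3))

A fairness constraint can be read as an upper bound on this gap; tightening the bound usually costs some overall accuracy, which is the balance noted above.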

Impact

This work underscores the importance of balancing fairness, accuracy, and privacy in AI systems. By leveraging differential privacy and fairness metrics, it provides practical solutions for mitigating algorithmic risks.
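
One way to see the privacy side of that balance concretely is to measure how the utility of a differentially private count degrades as the privacy budget tightens. The short sketch below sweeps a few epsilon values for a Laplace-noised count; the count value and epsilon grid are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
true_count = 1000                      # illustrative census-style count
epsilons = [0.1, 0.5, 1.0, 2.0, 5.0]   # smaller epsilon = stronger privacy guarantee

for eps in epsilons:
    # Laplace mechanism for a count query (sensitivity 1): noise scale is 1 / eps.
    noisy = true_count + rng.laplace(scale=1.0 / eps, size=10_000)
    print(f"epsilon={eps:>4}: mean |error| = {np.mean(np.abs(noisy - true_count)):.2f}")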

Section        Details
Objective      Implement fairness trade-offs and privacy-preserving mechanisms in ML systems.
Methodology    • Applied differential privacy to mitigate reconstruction risks in census data.
               • Evaluated fairness constraints to improve outcomes for sensitive demographic groups.
Findings       • Differential privacy reduced reconstruction risks but introduced minor trade-offs in data utility.
               • Fairness constraints mitigated group disparities but required balancing accuracy and fairness.
Impact         Demonstrated practical solutions for achieving fairness and privacy in machine learning systems.

Table: PSET 6 – Structural Fairness and Differential Privacy in ML

Additional work

Table: Exploratory Data Analysis and Prompt Engineering

Section        Details
Objective      Evaluate recommender systems, content moderation, and misinformation on online platforms.
Methodology    • Built K-NN-based recommendation models (a minimal sketch follows this table).
               • Analyzed user engagement metrics and content patterns.
Findings       • Trade-off between accuracy (high ratings) and popularity (accumulated ratings).
               • Highlighted systemic biases in recommendation algorithms and misinformation propagation risks.
Impact         Showcased the societal implications of algorithmic decisions and user interaction models.

Table: Algorithmic Impact on Online Platforms
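
The K-NN-based recommendation models referenced in the table above can be illustrated with a simple user-based neighborhood predictor. The rating matrix, similarity measure, and parameter choices below are assumptions for the sketch, not the coursework's actual dataset or model.

import numpy as np

def knn_predict(ratings, user, item, k=2):
    """Predict ratings[user, item] with user-based K-NN and cosine similarity.

    ratings: 2-D array (users x items) where 0 marks an unrated item.
    """
    neighbors = np.where(ratings[:, item] > 0)[0]        # users who rated this item
    neighbors = neighbors[neighbors != user]
    if neighbors.size == 0:
        return float(ratings[ratings > 0].mean())        # global-mean fallback

    def cosine(u, v):
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        return float(u @ v / denom) if denom else 0.0

    sims = np.array([cosine(ratings[user], ratings[n]) for n in neighbors])
    top = np.argsort(sims)[::-1][:k]                      # the k most similar raters
    weights = sims[top]
    if weights.sum() == 0:
        return float(ratings[neighbors, item].mean())
    return float(weights @ ratings[neighbors[top], item] / weights.sum())

# Toy user-item rating matrix (rows: users, columns: items; 0 = unrated, assumed data).
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
])
print(knn_predict(R, user=0, item=2, k=2))

Because neighborhoods are dominated by heavily rated items and prolific raters, a predictor of this kind tends to surface already-popular content, which is one source of the accuracy-versus-popularity trade-off noted in the findings.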

Conclusion

Through this problem set and others like it, I applied fairness frameworks, bias detection methods, and privacy-preserving strategies to evaluate and address ethical challenges in AI systems. This work demonstrates my ability to design rigorous evaluations, identify systemic biases, and implement solutions that align with societal values. Such methodologies are essential for building AI systems that are fair, responsible, and impactful.