
Unmasking the Bias in Facial Recognition Algorithms
MIT IDE | MIT Initiative on the Digital Economy | January 2024





Machine learning models inherit the biases of their training data, as seen in Amazon's abandoned hiring model that discriminated against women. Because data reflects societal biases, datasets carry "power shadows": the over-representation of historically dominant groups, such as lighter-skinned individuals and men. These power shadows must be acknowledged and addressed during technology development to avoid perpetuating existing social hierarchies.

Key Themes

1. Amazon Hiring Example: How biased training data produced a hiring model that discriminated against women.
2. Skewed Data: Under-representation of darker-skinned individuals and women in datasets.
3. Colonialism and Colorism: How historical injustices shape today's data biases.
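The representation skew behind these themes can be made concrete with a quick dataset audit. The sketch below is purely hypothetical — the group labels and proportions are invented for illustration and are not drawn from any real benchmark:

```python
# Hypothetical audit of demographic representation in a training set.
# The labels and counts are invented; a real audit would read dataset
# annotations instead of hard-coded lists.
from collections import Counter

# Toy stand-in for a face dataset's skin-tone annotations.
samples = ["lighter"] * 800 + ["darker"] * 200

counts = Counter(samples)
total = len(samples)
shares = {group: count / total for group, count in counts.items()}
print(shares)  # {'lighter': 0.8, 'darker': 0.2}
```

A model trained on such a split sees four times as many examples of one group as the other, which is the kind of imbalance the article's "power shadow" framing points to.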
