There has been a renaissance in Artificial Intelligence systems in recent years. Yet despite the hype, only a small fraction of Machine Learning models (around 13%) ever see the light of day! Effectively building and deploying machine learning models is more of an art than a science. ML models are inherently complex, have fuzzy boundaries, and rely heavily on the underlying data distribution. But what if they are trained on biased data? Then they will make highly biased decisions! As the famous saying goes, "Garbage in, garbage out": if a model is trained on a skewed and unfair data distribution, it is bound to produce skewed and unfair output. So join me in this talk, where I will share my learnings and effective practices for building and deploying ethical, fair, and unbiased machine learning models in production.
Have you ever wondered why the increasing reliance on Machine Learning systems raises concerns about fairness and bias in their data-driven decisions? If ML models are built on skewed data, or are not designed to mitigate bias, they can perpetuate and even amplify existing inequalities and injustices!
Since there is no one-size-fits-all approach, building and deploying a fair and unbiased ML system is more of an art than a science! In this talk, we will first explore the challenges involved in building and deploying fair and unbiased ML systems. Second, we will examine the technical debt incurred while building such systems and how to investigate it. Finally, we will learn fundamental strategies and best practices for ensuring your ML models are fair, unbiased, and ethical!