OK, so does this mean reinforcement learning is inapplicable to such contexts? I was referring only to reinforcement learning.
The examples you provided are of supervised learning, where historical data is used to train the algorithm.
I've mentioned the case of Amazon's recruitment algorithm, which discriminated against women because its training data was biased; the same problems can arise in crime prediction, where higher arrest and incarceration rates for Indigenous Australians or African Americans can lead algorithms to reproduce racial profiling.
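To make the mechanism concrete, here is a hypothetical, minimal sketch (synthetic data, invented group labels and a made-up hiring rule, not Amazon's actual system): when historical decisions apply a higher bar to one group, a naive model fit to those decisions simply reproduces the disparity.

```python
import random

random.seed(0)

def make_history(n=1000):
    """Generate synthetic historical hiring decisions biased against group B."""
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        skill = random.random()  # skill uniformly distributed in both groups
        # Biased historical rule: group B needed a higher skill bar to be hired.
        bar = 0.5 if group == "A" else 0.7
        data.append((group, skill, skill > bar))
    return data

def fit_hire_rates(history):
    """A naive 'model': learn each group's empirical hire rate and reuse it."""
    rates = {}
    for g in ("A", "B"):
        outcomes = [hired for group, _, hired in history if group == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

rates = fit_hire_rates(make_history())
# Despite identical skill distributions, the learned rates mirror the
# historical bias: group B's predicted hire rate is well below group A's.
print(rates)
```

The point is that the model never sees "race" or "gender" explicitly; the disparity is baked into the labels themselves, which is exactly the failure mode in the Amazon and predictive-policing cases.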
Predictive policing algorithms are racist. They need to be dismantled. – MIT Technology Review