“Given that a person’s gender cannot be inferred by appearance,” reads the email, “we have decided to remove these labels to align with the Artificial Intelligence Principles at Google, specifically Principle #2: Avoid creating or reinforcing unfair bias.” The bias Google refers to stems from “flawed training data,” a much-discussed problem: a flawed model makes assumptions, so anyone who doesn’t fit its notion of ‘man’ or ‘woman’ will be misgendered. By labeling every person simply as ‘person,’ Google aims to avoid this mistake.
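To illustrate what the change means in practice, here is a hypothetical before-and-after fragment of a Cloud Vision label-detection response. The `labelAnnotations` field name follows the API's documented response format, but the specific scores and values are illustrative, not taken from Google.

```json
{
  "before": {
    "labelAnnotations": [
      { "description": "Woman", "score": 0.92 }
    ]
  },
  "after": {
    "labelAnnotations": [
      { "description": "Person", "score": 0.95 }
    ]
  }
}
```

In other words, where the API previously returned a gendered label for a detected human, it now returns only the neutral label ‘person.’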
Frederike Kaltheuner, a tech policy fellow at Mozilla, told Business Insider: “Anytime you automatically classify people, whether that’s their gender or their sexual orientation, you need to decide on which categories you use in the first place — and this comes with lots of assumptions. Classifying people as male or female assumes that gender is binary. Anyone who doesn’t fit it will automatically be misclassified and misgendered. So this is about more than just bias — a person’s gender cannot be inferred by appearance. Any AI system that tried to do that will inevitably misgender people.”
Google acknowledges this bias in its API and AI (artificial intelligence) algorithms and is seeking to correct it: “We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.” Google has yet to share any further news about the tag feature.