Teaching AI to make tough calls—without exposing your values

Self-driving cars may face no-win crash scenarios. To build fairer AI, researchers often ask the public which option they prefer, then aggregate the answers. But your moral preferences are deeply personal—and sharing them can risk your privacy.

This study introduces the first privacy-preserving, crowd-guided approach to training AI for ethical dilemmas, built on differential privacy (a rigorous method for hiding any individual's contribution inside aggregate statistics).
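
Formally (a standard fact about differential privacy, not specific to this paper): a mechanism M is ε-differentially private if, for any two datasets D and D′ that differ in one person's data, and for any set of outputs S,

  Pr[M(D) ∈ S] ≤ exp(ε) · Pr[M(D′) ∈ S]

The smaller ε is, the less any single person's answer can change what an observer sees.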

How it works

  • Centralized protection (VLCP, RLCP): A trusted aggregator adds carefully calibrated noise to the group average, shielding either each voter (VLCP) or each individual record (RLCP). See the first sketch below.
  • Distributed protection (VLDP, RLDP): No trusted aggregator is needed; each person perturbs their own response locally, with a personalized privacy setting, before sending it. See the second sketch below.
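
To make the centralized route concrete, here is a minimal sketch of the textbook Laplace mechanism applied to a vote average. The function name and parameters are illustrative, not the paper's actual VLCP/RLCP implementations:

```python
import numpy as np

def central_dp_average(votes, epsilon):
    """Laplace mechanism: a differentially private mean of binary votes.

    votes:   array of 0/1 choices (e.g. 1 = "swerve", 0 = "stay course")
    epsilon: privacy budget -- smaller means stronger privacy, more noise
    """
    votes = np.asarray(votes, dtype=float)
    n = len(votes)
    # One voter flipping their 0/1 answer moves the mean by at most 1/n,
    # so the Laplace noise scale is (1/n) / epsilon.
    sensitivity = 1.0 / n
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return votes.mean() + noise

# Example: 10,000 synthetic voters, 70% preferring option 1
votes = np.random.binomial(1, 0.7, size=10_000)
print(central_dp_average(votes, epsilon=0.5))  # close to 0.7, but noisy
```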
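
And a minimal sketch of the distributed route, using classic randomized response with a per-voter privacy budget. Again, the names are hypothetical and this is a generic local-DP mechanism, not the paper's exact VLDP/RLDP protocols:

```python
import numpy as np

def randomized_response(choice, epsilon):
    """Report the true 0/1 choice with probability p = e^eps / (e^eps + 1),
    otherwise flip it. Runs on each voter's device before anything is sent."""
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return choice if np.random.random() < p else 1 - choice

def debiased_proportion(reports, epsilons):
    """Recover an unbiased estimate of the true proportion from noisy reports.

    With truth probability p, E[report] = (2p - 1) * x + (1 - p), so each
    voter's estimate is (report - (1 - p)) / (2p - 1); then average them.
    """
    reports = np.asarray(reports, dtype=float)
    eps = np.asarray(epsilons, dtype=float)
    ps = np.exp(eps) / (np.exp(eps) + 1.0)
    return np.mean((reports - (1.0 - ps)) / (2.0 * ps - 1.0))

# Example: each voter picks their own privacy budget
truth = np.random.binomial(1, 0.7, size=10_000)
eps = np.random.uniform(0.5, 2.0, size=10_000)
reports = np.array([randomized_response(x, e) for x, e in zip(truth, eps)])
print(debiased_proportion(reports, eps))  # close to 0.7
```

The debiasing step is why aggregate accuracy can stay high even though any individual report may be a lie, which mirrors the accuracy-versus-privacy trade-off the paper reports.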

Across synthetic and real datasets, the approach kept decision accuracy high while protecting individuals’ moral choices.

Bottom line: We can guide AI with our values—without revealing who chose what.

Paper: http://arxiv.org/abs/1906.01562v2

Register: https://www.AiFeta.com

#AI #Ethics #Privacy #DifferentialPrivacy #AutonomousVehicles #Crowdsourcing #DataProtection #ResponsibleAI
