How we built ethics into our facial recognition app Amanda AI

Checking in with Amanda AI

At TTT, we take ethics seriously. That’s why we’re proud of Amanda AI, our innovative event check-in technology that uses facial recognition to give attendees a better experience.

From the beginning, we designed Amanda AI to intrinsically respect privacy. Unfortunately, not everyone has the same ethical commitments.

Clearview AI is a facial recognition tool that started making headlines in February 2020. The app takes a person’s image and scans a database of over 3 billion photos to identify them. Although the underlying technology is very effective, there are consent concerns around Clearview’s use of people’s photos, which it scraped from the internet, and serious privacy concerns about its use for surveillance.

While Clearview AI claims it developed the app for law enforcement, hackers obtained Clearview’s client list and revealed that the retailers Best Buy and Macy’s had access to the powerful tool. 

In response, the Canadian government announced an investigation into the use of Clearview AI by Canadian police. This investigation might set a precedent for how facial recognition technology is used in law enforcement.

Clearview AI might work a little too well. Old photos of this CNN journalist surfaced when she searched for herself on the Clearview app. Watch the full video on CNN Business.

Facial recognition demands a serious discussion of ethics and privacy involving both the developers and the users of the technology. That’s why TTT is a committed member of the Artificial Intelligence network of British Columbia, where we are spearheading ethics for facial recognition. We do everything in our power to make sure Amanda AI intrinsically respects privacy and consent.

You upload your photo to Amanda AI.

Amanda AI requires your consent. We only use photos you choose to upload. 

For example, if you attended the Traction Conference in the summer of 2019 and wanted to use facial recognition for a speedy check-in, you would have received an email in advance inviting you to upload a photo. The photo would be analyzed for your key biometrics and then deleted. After the event ended, the facial recognition biometrics would be deleted too. The whole process was completely optional.
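Amanda AI’s internal code isn’t public, but the flow above is simple enough to sketch. Here’s a minimal illustration using the open-source face_recognition library; the function name, the event_db store, and the error handling are assumptions for the example, not Amanda AI’s actual implementation.

```python
import os
import face_recognition

def enroll_attendee(photo_path: str, attendee_id: str, event_db: dict) -> None:
    """Extract biometric features from an opt-in photo, then delete the photo."""
    image = face_recognition.load_image_file(photo_path)
    encodings = face_recognition.face_encodings(image)  # 128-d feature vectors
    if not encodings:
        raise ValueError("No face found in the uploaded photo")
    # Keep only the mathematical representation, never the photo itself.
    event_db[attendee_id] = encodings[0]
    os.remove(photo_path)  # the original upload is deleted immediately
```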

According to our Traction Conference event recap, “Traction gave event attendees the choice to opt-in to facial recognition and upload a photo at ticket purchase… allowing attendees to give consent and upload an image of themselves.” 

Using Amanda AI to check in at Traction Conference 2019. The attendee must opt in over email before arriving at the event.

While Amanda AI requires you to opt in, a concern about facial recognition for law enforcement is that mugshots, surveillance footage, or photos scraped from the internet are used without consent. Once you’re in a facial recognition database, it doesn’t matter if you want to remain anonymous: you’re in there.

The ethics of acquiring photos and video matters, but so does how that media is stored.

We don’t store photos.

While facial recognition is still a new technology, we’re already in an age of data leaks.

Leaks happen, and it’s users who suffer, despite the efforts of companies like Facebook, LinkedIn, MySpace, and Equifax to prevent them.

Online breach-notification services let you check whether your email, your favourite password, or other info has been compromised in a data breach.

Amanda AI’s approach to storing your data is inherently safe. We don’t store photos, only mathematical representations of your facial identifiers. You can’t leak photos you don’t store. 
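To make that concrete, here’s what checking in against stored encodings, with no photos anywhere, might look like. Again, this is a hedged sketch built on the open-source face_recognition library; the check_in function, the event_db store, and the 0.6 tolerance are illustrative assumptions, not Amanda AI’s published internals.

```python
import face_recognition

def check_in(camera_frame, event_db: dict, tolerance: float = 0.6):
    """Match a live capture against stored encodings; no photos are involved."""
    encodings = face_recognition.face_encodings(camera_frame)
    if not encodings or not event_db:
        return None  # no face detected, or nobody enrolled
    attendee_ids = list(event_db)
    known = [event_db[a] for a in attendee_ids]
    # Euclidean distance between 128-d face encodings; lower means more similar.
    distances = face_recognition.face_distance(known, encodings[0])
    best = int(distances.argmin())
    return attendee_ids[best] if distances[best] <= tolerance else None
```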

How law enforcement facial recognition companies store their photos is not public information. But if they can identify and produce photos or videos of people of interest, they must store photos in some form, which inherently puts their databases at risk of a leak.

Also, your biometric data is not shared with anyone. Neither the conference host, the boothing organizations, nor the attendees ever have access to this data.

Another way Amanda AI handles data ethically is that we don’t keep user information indefinitely.

We destroy our database of users after every event.

Once the conference ends, we delete your data. It’s as simple as that. 

Again, to be clear, we don’t store photos, only digital representations of your facial features used for identification. Your photo is deleted immediately; it’s the extracted identifying information that is kept for use until your event ends.
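As a rough sketch of that retention policy (the purge_event_data function and the in-memory store are assumptions for illustration, not our production code):

```python
from datetime import datetime, timezone

def purge_event_data(event_db: dict, event_end: datetime) -> None:
    """Destroy every stored encoding once the event is over."""
    if datetime.now(timezone.utc) >= event_end:
        event_db.clear()  # the event's entire biometric database is wiped
```

In practice this would run as a scheduled job against a real datastore, but the principle is the same: once the event ends, nothing identifying remains.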

Many facial recognition apps rely on maintaining databases of biometrics, because an AI algorithm needs to be trained on large datasets to be effective. That can mean access to police databases of mug shots, as FaceFirst was given in San Diego. Concerns like these contributed to California recently passing a three-year ban on facial recognition in police body cameras.

How databases are managed will be a key element in developing trust with users.

Trust is key to the future of facial recognition

In all honesty, it isn’t the underlying facial recognition technology that’s the problem; it’s the potential threat to privacy.

After all, the various controversies show that Clearview AI has already been used unethically. The mishandling of highly sensitive biometric data suggests we are far from a consensus on facial recognition ethics, which is why at TTT we prefer having open conversations about AI ethics.

The emergence of facial recognition technology means your face can identify you nearly as precisely as your DNA. It’s about as personally identifying as data gets, and we want to respect that.

With great technology comes great responsibility, and TTT is committed to providing ethical, user-centric facial recognition.