Chatbot helps refugees claim asylum

This article first appeared on The Guardian (theguardian.com).


The creator of a chatbot which overturned more than 160,000 parking fines and helped vulnerable people apply for emergency housing is now turning the bot to helping refugees claim asylum.

The original DoNotPay, created by Stanford student Joshua Browder, describes itself as “the world’s first robot lawyer”, giving free legal aid to users through a simple-to-use chat interface. The chatbot, which runs on Facebook Messenger, can now help refugees fill in an immigration application in the US and Canada. For users in the UK, it helps with applications for asylum support.

The London-born developer worked with lawyers in each country and spoke to asylum seekers whose applications had been successful.

Browder says this new functionality for his robot lawyer is “long overdue”. He told the Guardian: “I’ve been trying to launch this for about six months – I initially wanted to do it in the summer. But I wanted to make sure I got it right because it’s such a complicated issue. I kept showing it to lawyers throughout the process and I’d go back and tweak it.

“That took months and months of work, but we wanted to make sure it was right.”

Browder began working on this project before Donald Trump’s election as US president but he said he feels it’s more important now than ever. “I wanted to add Canada at the last minute because of the changes in the political background in the US,” he said.

The chatbot works by asking the user a series of questions to determine which application the refugee needs to fill out and whether they are eligible for asylum protection under international law.

After this, it takes down the necessary details required for the appropriate asylum application – an I-589 for the United States or a Canadian Asylum Application for Canada. Those in the UK are told they need to apply in person, and the bot helps fill out an ASF1 form for asylum support.
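To make that flow concrete, here is a minimal Python sketch of this kind of intake logic. The questions, function names and form labels are illustrative assumptions, not DoNotPay’s actual code.

```python
# Illustrative intake flow; names and questions are assumptions,
# not DoNotPay's implementation.

FORMS = {
    "US": "I-589 (Application for Asylum and for Withholding of Removal)",
    "CANADA": "Canadian Asylum Application",
    "UK": "ASF1 (asylum support; the asylum claim itself is made in person)",
}

def choose_form(country: str) -> str:
    """Map the applicant's country to the appropriate form."""
    key = country.strip().upper()
    if key not in FORMS:
        raise ValueError(f"Unsupported country: {country!r}")
    return FORMS[key]

def run_intake() -> dict:
    """Ask a short series of plain-English questions, then pick a form."""
    answers = {
        "country": input("Which country are you applying in? (US/Canada/UK) "),
        "fears_persecution": input(
            "Do you fear being harmed if you return home? (yes/no) "
        ).strip().lower() == "yes",
    }
    answers["form"] = choose_form(answers["country"])
    return answers

if __name__ == "__main__":
    print("Suggested form:", run_intake()["form"])
```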

Browder says it was crucial the questions were in plain English. “The language in these forms can be quite complicated,” he said.

These details are used to auto-fill an application form for either the US, Canada or the UK. “Once the form is sent off, the details are deleted from my end,” said Browder.

The 20-year-old chose Facebook Messenger as a home for the latest incarnation of his robot lawyer because of its accessibility. “It works with almost every device, making it accessible to over a billion people,” he said.

Browder acknowledges Messenger doesn’t come without its pitfalls. Unlike some other chat apps, it’s not automatically end-to-end encrypted. Browder says there is, however, end-to-end encryption between his server and Facebook. He added: “Ideally I would love to expand to WhatsApp when their platform opens up, particularly because it’s popular internationally.”

Once the application is sent, the data is deleted from his servers within 10 minutes.
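The article does not describe the deletion mechanism, but a simple time-based purge would behave the way Browder describes. Below is a rough sketch assuming an in-memory store; every name in it is invented for illustration.

```python
import time

# Sketch only: the article says records are destroyed within 10
# minutes, so the mechanism shown here is an assumption.

PURGE_AFTER_SECONDS = 10 * 60  # the 10-minute window Browder describes

_records = {}  # session_id -> (created_at, details)

def store(session_id, details):
    """Hold the applicant's details only long enough to fill the form."""
    _records[session_id] = (time.time(), details)

def purge_expired():
    """Delete every record older than the 10-minute window."""
    cutoff = time.time() - PURGE_AFTER_SECONDS
    for session_id, (created_at, _) in list(_records.items()):
        if created_at < cutoff:
            del _records[session_id]
```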

The next step is making the service available in more languages. Browder is currently working on translating it into Arabic.

Immigration lawyer Sophie Alcorn welcomed DoNotPay’s latest venture. She said: “As an immigration attorney, I can see the major benefits that leveraging sophisticated chatbot technology will have in the asylum application process.

“It will be easier for applicants to submit their applications and it will empower legal aid organisations to assist a larger number of clients.

“Asylum seekers want to follow the laws and do everything properly, and this technology will help them do so.”

DoNotPay was initially a free service that guided people with parking fines through the appeals process.

The chatbot was later programmed to deal with other legal issues, such as claiming compensation for delayed flights and trains and reclaiming payment protection insurance (PPI). As of August 2016, it also helps with housing issues. The homelessness bot has had more than 3,000 users, with more than 240,000 messages sent and received.

Browder runs DoNotPay alongside his studies at Stanford University. He said: “My degree has become a bit of a side project these days.”

The Internet of Things Meets Barbie

“Hello Barbie” was released on 14 February at a toy fair in America. She is wi-fi enabled and records kids’ conversations to develop authentic, real-time responses to them. While the tech press and others are dubbing her “eavesdropping Barbie” and “creepy”, she’s not the first doll to be internet-enabled: see Cayla, a talking doll that uses speech recognition and Google’s translation tools, and which was subsequently hacked. Besides the obvious questions around privacy and safety, what does this mean for the future of play?

Cruel Algorithms

This post, written by Eric Meyer, originally appeared on Slate (slate.com).

I didn’t go looking for grief on Christmas Eve, but it found me anyway, and I have designers and programmers to thank for it. In this case, the designers and programmers are somewhere at Facebook.

I know they’re probably very proud of the work that went into the “Year in Review” app they designed and developed, and deservedly so — a lot of people have used it to share their highlights of 2014. I kept seeing them pop up in my feed, created by various friends, almost all of them with the default caption, “It’s been a great year! Thanks for being a part of it.” Which was, by itself, a little bit unsettling, but I didn’t begrudge my friends who’d had a good year. It was just a weird bit of copy to see, over and over, when I felt so differently.


Still, it was easy enough to avoid making my own Year in Review, and so I did. After all, I knew what kind of year I’d had. But then, the day before Christmas, I went to Facebook and there, in my timeline, was what looked like a post or an ad, exhorting me to create a Year in Review of my own, complete with a preview of what that might look like.

Clip art partiers danced around a picture of my middle daughter, Rebecca, who is dead. Who died this year on her sixth birthday, less than 10 months after we first discovered she had aggressive brain cancer.

Yes, my year looked like that. True enough. My year looked like the now-absent face of my Little Spark. It was still unkind to remind me so tactlessly, and without any consent on my part.

I know, of course, that this is not a deliberate assault. This inadvertent algorithmic cruelty is the result of code that works in the overwhelming majority of cases, reminding people of the awesomeness of their years, showing them a selfie at a party or whale spouts from sailing boats or the marina outside their vacation house.

But for those of us who lived through the death of loved ones, or spent extended time in the hospital, or were hit by divorce or foreclosure or job loss or any one of a hundred possible crises, we might not want another look at this past year.

To show me Rebecca’s face surrounded by partygoers and say “Here’s what your year looked like!” is jarring. It feels wrong, and coming from an actual person, it would be wrong. Coming from code, it’s just unfortunate. These are hard, hard problems. It isn’t easy to programmatically figure out if a picture has a ton of Likes because it’s hilarious, astounding, or heartbreaking.

Algorithms are essentially thoughtless. They model certain decision flows, but once you run them, no more thought occurs. To call a person “thoughtless” is usually considered a slight, or an outright insult; and yet, we unleash so many literally thoughtless processes on our users, on our lives, on ourselves.
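A toy example makes the point. The sketch below is not Facebook’s code; the data and function are invented. It selects a “year highlight” purely by engagement, which is exactly the kind of thoughtless decision flow at issue.

```python
def pick_year_highlight(photos):
    """Return the photo with the most likes, regardless of context."""
    return max(photos, key=lambda p: p["likes"])

photos = [
    {"caption": "Beach day", "likes": 40},
    {"caption": "In memoriam", "likes": 500},  # grief draws engagement too
]

# The most-liked photo "wins" even when the likes signal condolence.
print(pick_year_highlight(photos)["caption"])  # -> "In memoriam"
```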

Where the human aspect fell short, in this case, was in pushing the preview image into my Facebook timeline without first making sure I wanted to see it. I assume Facebook only showed the ad to users who hadn’t already created a Year in Review, in an attempt to drive more adoption. So the Year in Review ad kept coming up in my feed, rotating through different fun-and-fabulous backgrounds but always showing Rebecca, as if celebrating her death, until I finally clicked the drop-down arrow and said I didn’t want to see it any more. It’s nice that I can do that, but how many people don’t know about the “hide this” option? Way more than you think.

This whole situation illuminates one aspect of designing for crisis, or maybe a better term is empathetic design. In creating this Year in Review ad, there wasn’t enough thought given to cases like mine, or friends of Chloe, or really anyone who had a bad year. The ad’s design was built around the ideal user—the happy, upbeat, good-life user.

It didn’t take other use cases into account. It may not be possible to reliably predetect whether a person wants to see their year in review, but it’s not at all hard to ask politely—empathetically—if it’s something they want. That’s an easily solvable problem. Had the ad been designed with worst-case scenarios in mind, it probably would have done something like that.

To describe two simple fixes: First, don’t prefill a picture into the preview until you’re sure the user actually wants to see pictures from their year. And second, instead of pushing a preview image into the timeline, maybe ask people if they’d like to try a preview—just a simple yes or no. If they say no, ask if they want to be asked again later, or never again. And then, of course, honor their choices.
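As a rough sketch of what that opt-in flow might look like, with hypothetical function and preference names rather than anything from Facebook’s actual codebase:

```python
def maybe_offer_preview(user_prefs):
    """Offer a preview only with consent, and honor 'never ask again'."""
    if user_prefs.get("never_ask_year_in_review"):
        return False  # the user said never; honor that choice
    answer = input("Would you like to preview your Year in Review? (yes/no) ")
    if answer.strip().lower() == "yes":
        return True  # only now is it safe to show any photos
    again = input("Should we ask again later? (yes/no) ")
    if again.strip().lower() != "yes":
        user_prefs["never_ask_year_in_review"] = True
    return False
```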

As a Web designer and developer myself, I decided to blog about all this on my personal Web site, figuring that my colleagues would read it and hopefully have some thoughts of their own. Against all expectations, it became an actual news story. Well before the story had gone viral, the product manager of Facebook’s Year in Review emailed me to say how sorry he and his team were for what had happened, and that they would take my observations on board for future projects. In turn, I apologized for dropping the Internet on his head for Christmas. My only intent in writing the post had been to share some thoughts with colleagues, not to make his or anyone’s life harder.

And to be clear, a failure to consider edge cases is not a problem unique to Facebook. Year in Review wasn’t an aberration or a rare instance. This happens all the time, all over the Web, in every imaginable context. Taking worst-case scenarios into account is something that Web design does poorly, and usually not at all. If this incident prompts even one Web designer out there to make edge cases a part of every project he or she takes on, it will have been worth it. I hope that it prompts far more than that.

Making Strangers Less Strange


The MIT Media Lab’s Playful Systems Group and the Dalai Lama Center for Transformative Ethics have launched a new experiment called 20 Day Stranger.

The central question: can a mobile application change the way we think about strangers?

Aim: to create an intimate and anonymous connection between you and another person – a total stranger. Details – like name, age and address – are never revealed. For 20 days, both strangers continuously update each other about where they are, what they are doing and, eventually, how they are feeling.
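To illustrate the anonymity constraint, here is a small sketch, with invented field names, of how an update could be stripped of identifying details before it is shared:

```python
# Field names are invented for illustration; this is not the app's code.
IDENTIFYING_FIELDS = {"name", "age", "address", "email", "phone"}

def anonymise_update(update):
    """Drop identifying details before sending an update to the stranger."""
    return {k: v for k, v in update.items() if k not in IDENTIFYING_FIELDS}

update = {
    "name": "Alex",          # never shared
    "place": "a coffee shop",
    "activity": "reading",
    "feeling": "calm",
}
print(anonymise_update(update))
# -> {'place': 'a coffee shop', 'activity': 'reading', 'feeling': 'calm'}
```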

The rationale: in a world mediated through computing, our everyday lives are increasingly affected by complex and invisible systems. Some of these are algorithmic trades on the stock market; others are search results for information, movies, or a date. These systems often aspire to transparency, usability, and efficiency. Playful systems take a different approach, bringing them to the foreground as games, stories, narratives, and visualizations. Playful systems embrace complexity rather than conceal it, and seek to delight, not disappear.