My name is Jens Göring and I am part of the three-developer team at LANA Labs in Berlin, Germany.
As part of our social responsibility, we felt it was important to contribute to the COVID-19 Hackathon.
Elderly people are especially vulnerable to this new disease and need more assistance than younger generations. At the same time, this group often has difficulty using newer technology such as apps and websites. We therefore developed a phone bot that connects people who cannot go outside to get food or medicine with volunteers who are willing to help them.
Our solution has two parts: one for the people in need and one for the people who want to offer their help.
As a volunteer, you can simply visit the website and see on a map where the people who need assistance are located.
We chose AWS to build and host our prototype, since we knew it offers Amazon Lex for speech recognition. One open question was how to route a phone number to Lex, but we quickly learned that Amazon Connect does exactly that.
Here, we defined a 'Contact Flow' which at its core connects the caller to Amazon Lex when they say 'Help':
The rest was just a matter of providing texts for the success and failure cases, which are automatically spoken by Amazon Polly. For Amazon Lex, we defined the slots (variables) and the questions the bot asks in order to fill them with values. We also configured Lex to invoke a certain Lambda function once it is done:
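To illustrate, a minimal fulfillment handler for such a bot could look like the sketch below. The slot names (`Name`, `Address`, `Request`) are hypothetical examples, not necessarily the ones we used; the event and response shapes follow the Lex (V1) Lambda format:

```python
# Sketch of a Lex (V1) fulfillment Lambda.
# Slot names below are illustrative, not the exact ones from our bot.

def close(message):
    """Build a Lex V1 'Close' response; Polly reads the message to the caller."""
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }

def lambda_handler(event, context):
    # Lex passes the filled slot values inside the current intent.
    slots = event["currentIntent"]["slots"]
    name = slots.get("Name")
    address = slots.get("Address")
    request = slots.get("Request")

    # ...the real function would now geocode the address and store the record...

    return close(f"Thank you {name}, a volunteer will bring you {request} soon.")
```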
The Lambda function that writes the data is written in Python and simply passes the slot values from Lex on to a PostgreSQL database. In between, it calls a geocoder to resolve the given address string to a latitude and longitude, which is needed to display the markers on the map later. We used Pelias (https://github.com/pelias/pelias), an open-source geocoder, for this.
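The geocoding step can be sketched roughly as follows. The endpoint URL and the table and column names are illustrative assumptions; Pelias returns standard GeoJSON, so the coordinates sit in `features[0].geometry.coordinates` as `[longitude, latitude]`:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical Pelias endpoint; the real URL depends on the deployment.
PELIAS_URL = "http://localhost:4000/v1/search"

def extract_lon_lat(geojson):
    """Pull (lon, lat) out of a Pelias GeoJSON search response."""
    features = geojson.get("features", [])
    if not features:
        return None
    # GeoJSON stores point coordinates as [longitude, latitude].
    lon, lat = features[0]["geometry"]["coordinates"]
    return lon, lat

def geocode(address):
    """Resolve an address string to (lon, lat) via Pelias."""
    url = PELIAS_URL + "?text=" + urllib.parse.quote(address)
    with urllib.request.urlopen(url) as resp:
        return extract_lon_lat(json.load(resp))

# Writing the record would then be a plain INSERT (psycopg2 assumed bundled
# with the Lambda; table and columns are made up for this sketch):
# cur.execute(
#     "INSERT INTO requests (name, request, address, lon, lat) "
#     "VALUES (%s, %s, %s, %s, %s)",
#     (name, request, address, lon, lat),
# )
```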
To display the data on a map for potential helpers, we created a small React app hosted on S3, which in turn reads the data from the database via another Lambda function. This time we also used Amazon API Gateway to provide a public endpoint for the frontend.
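The read endpoint can be sketched as a Lambda that returns the rows as JSON in the proxy-integration response format API Gateway expects. The row layout and the commented-out query are again illustrative assumptions:

```python
import json

def build_response(rows):
    """Shape DB rows into the Lambda proxy response API Gateway expects."""
    markers = [
        {"name": name, "request": request, "lon": lon, "lat": lat}
        for (name, request, lon, lat) in rows
    ]
    return {
        "statusCode": 200,
        "headers": {
            # CORS header so the S3-hosted frontend may call this endpoint.
            "Access-Control-Allow-Origin": "*",
            "Content-Type": "application/json",
        },
        "body": json.dumps(markers),
    }

def lambda_handler(event, context):
    # In the real function the rows come from PostgreSQL, e.g.:
    # cur.execute("SELECT name, request, lon, lat FROM requests")
    # rows = cur.fetchall()
    rows = []  # placeholder for the database query result
    return build_response(rows)
```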
For the map itself, we used the react-map-gl library (https://github.com/visgl/react-map-gl) together with Mapbox (https://www.mapbox.com/) for the map data.
In summary, it was an exciting but also challenging project to do over a weekend. Although the project is not production-ready, we hope that speech recognition can be used in more humanitarian projects in the future, so that the older generation can also get the help they need in times of crisis!