Rishab Nayak, a junior majoring in computer science (CS) and chemistry at Boston University, wants to build technologies that can be deployed at scale for social good, particularly in healthcare. When asked what he would do if he could do anything in life, Nayak says he would “create the tricorder from Star Trek—a small device that can [read a medical] diagnosis and prognosis.” He shares this mission with two friends from Bangalore, India—UMass Amherst CS students Aditya Narayanan and Abhinav Tripathy.
The trio first encountered the needs of people with visual impairments in April 2018 at Perkins Hacks—a hackathon for solving real-world problems hosted by Perkins School for the Blind in Watertown, Massachusetts. Nayak and his friends had grandparents with sight and/or mobility disabilities, so they wanted to invent a wearable device that helps older adults and people with low vision safely navigate spaces—“an autopilot for humans.”
The team developed their idea at HackHarvard in October 2018, using Cloud Storage, Google Cloud’s unified object storage service, and Firebase, Google’s app development platform. Their wearable prototype used a microcontroller with a camera to split video into frames on the device, then sent the frames to a Cloud Storage bucket for analysis before offering audio feedback about the people and objects surrounding the user. “It was easy to learn how to use Cloud Firestore as our NoSQL document database during the hackathon,” Nayak says. “We used the onSnapshot() method to listen to a document, which allowed us to update a database entry from anywhere and receive the update in real time on another device.”
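The real-time listening Nayak describes can be pictured as a simple publish/subscribe pattern: one device registers a callback on a document, another device writes to it, and the callback fires with the new data. The sketch below is a minimal in-memory stand-in for that pattern; the class and method names are illustrative and are not part of any Firebase SDK (the real `onSnapshot()` lives in Cloud Firestore's client libraries).

```python
# Toy in-memory sketch of the listener pattern behind Firestore's
# onSnapshot(): writers update a document, and every registered
# listener is called with the new snapshot. Illustrative only.
from typing import Callable, Dict, List


class DocumentStore:
    """Toy stand-in for a Firestore document collection."""

    def __init__(self) -> None:
        self._docs: Dict[str, dict] = {}
        self._listeners: Dict[str, List[Callable[[dict], None]]] = {}

    def on_snapshot(self, doc_id: str, callback: Callable[[dict], None]) -> None:
        # Register a callback that fires whenever the document changes.
        self._listeners.setdefault(doc_id, []).append(callback)

    def set(self, doc_id: str, data: dict) -> None:
        # Write the document, then notify every listener with the snapshot.
        self._docs[doc_id] = data
        for callback in self._listeners.get(doc_id, []):
            callback(data)


store = DocumentStore()
received = []

# "Device B" listens for frame-analysis results on a shared document.
store.on_snapshot("frames/123", received.append)

# "Device A" writes an analysis result; the listener fires immediately.
store.set("frames/123", {"label": "pedestrian", "confidence": 0.92})
```

In the real system, the write would come from the cloud analysis pipeline and the listener would run on the wearable, which then speaks the result to the user.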
Their device, called Stepify, won the Best Use of Google Cloud prize, which led to a transcontinental partnership with like-minded students more than 3,500 miles away.
Teaming up on a shared vision
Sara Rossato De Cesaro is a third-year architecture and urbanism student at Faculdade Meridional Passo Fundo IMED in Brazil. “Through my college research, I had more contact with people with visual impairment. I realized the difficulties they had,” she says. “So when I had the opportunity to suggest a theme for a hackathon project, I had those people in mind.”
De Cesaro brought together several university students to develop a wearable device for people with visual impairments. She and her team—Joicy Carvalho, Centro Universitário Patos de Minas (UNIPAM); Felipe das Chagas Silva, Centro Universitário Don Domênico (UNIDON); and Matheus Pereira dos Santos, Universidade Federal de Uberlândia (UFU)—competed at HackLab FNESP in September 2018 in São Paulo.
The team developed a prototype for special glasses connected to a headset and a mobile app that helps students who are visually impaired to navigate classroom environments. Their project, Vivir, won first prize at the hackathon. Google for Education’s Fernando Cruz, who had seen both the Boston and Brazil teams’ work, brought the two groups together for an international collaboration.
Heading in a new direction
The team worked with a Google Cloud technical mentor, software engineer Bo Shi, who helped them rethink some of their original concepts and create a scalable product using Google Cloud. Andrea Quadrado Mussi from IMED served as the teams’ faculty mentor.
“When I met team Stepify, I found their project very interesting and close to our idea,” De Cesaro says. “I saw a great opportunity to work with people with different ideas and backgrounds, to be able to aggregate all our knowledge into one project.”
Concerned over high manufacturing costs and complex hardware-software integrations, Shi steered the students away from creating a wearable device. Instead, he guided them toward using Google Cloud to create a mobile app for people who are visually impaired. “Instead of users having to buy another device, they could use something they had their whole lives—their cellphones,” Nayak says.
The Brazilian team brought expertise in marketing and business development, while the U.S. team handled software development. Their shared goal was to help people who are visually impaired safely navigate indoor and outdoor spaces. “I imagined the application would aid people with visual impairments with mobility and spatial navigation,” De Cesaro says, “with audio describing their environments, including obstacles found in their path.” They named their new mobile app EyeSpace.
“I saw a great opportunity to work with people with different ideas and backgrounds, to be able to aggregate all our knowledge into one project.”
Sara Rossato De Cesaro, Student, Faculdade Meridional Passo Fundo IMED
Realizing new possibilities
Nayak and his UMass Amherst teammates had built Stepify with Google Cloud, so it was natural to turn to Google Cloud again to create the EyeSpace app. They used machine learning products including AI Platform and Cloud AutoML Vision, and Firebase products including Firebase Authentication, Firebase Realtime Database, and Google Analytics for Firebase to get started.
“We used Cloud AutoML for our vision models and deployed them onto our devices using AutoML Vision Edge,” Nayak says. “We download the vision model on the device from the cloud and run it locally. That lets us have much quicker results in real time.”
They also developed a chatbot using Google Cloud’s Dialogflow. “The chatbot is built using Dialogflow, and it exposes both voice and text input options to the user,” Nayak explains. “The user can either speak to the chatbot and receive an audio response, or type to the chatbot and receive an audio response. For example, if a user is walking down the street [and wants to know what is in his or her path], they could either type or speak to the chatbot, asking ‘Is there an obstruction ahead?’ and the chatbot would respond via audio.”
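The input routing Nayak describes can be sketched as a funnel: typed text and transcribed speech both converge on one intent matcher, and every match produces text destined for audio playback. The intents, phrases, and function names below are invented for illustration; in the real app, intent matching is delegated to Dialogflow rather than handled with a lookup table like this.

```python
# Hedged sketch of the chatbot's input routing: both typed text and
# transcribed speech funnel into one intent matcher, and every match
# returns a text response destined for text-to-speech. Illustrative only.
INTENTS = {
    "obstruction_check": {
        "phrases": ["is there an obstruction ahead", "anything in my path"],
        "response": "The path ahead looks clear.",
    },
}


def detect_intent(query: str) -> str:
    """Return the response text for the best-matching intent."""
    normalized = query.lower().strip("?!. ")
    for intent in INTENTS.values():
        if normalized in intent["phrases"]:
            return intent["response"]
    return "Sorry, I didn't understand that."


def handle_input(payload: str, mode: str) -> str:
    # Voice input would first pass through speech-to-text; here we assume
    # the transcription has already happened, so both modes converge.
    response_text = detect_intent(payload)
    # In the real app this text is fed to text-to-speech for audio output.
    return response_text
```

A typed query and a transcribed spoken query take the same path, e.g. `handle_input("Is there an obstruction ahead?", mode="voice")`, which is what lets the app expose both input options without duplicating logic.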
The team used Google’s Flutter SDK (software development kit), a UI toolkit that lets developers build natively compiled mobile applications from a single codebase. The team wrote some 10,000 lines of code to create EyeSpace for Android and iOS.
“Bo Shi helped direct our project to a workable product in a short time,” De Cesaro says. “With our custom-trained vision models, our app allows the user to ‘see’ using their phone camera.”
The EyeSpace team used G Suite’s collaboration and productivity tools—Google Drive, Google Docs, Google Sheets, and Google Meet for video conferencing—to keep their work flowing. In April 2019, they did a limited beta release to test the app with a small group of people with visual impairments through a nonprofit in Brazil.
User feedback indicated the need to fine-tune the app’s ability to detect terrain. “We designed the vision model to differentiate between a sidewalk, a gravel path, stairs, or just plain road,” Nayak says. Users also wanted increased speed in the app’s response time in real-life situations. To make the app faster, the team decided to make the ML model downloadable on mobile devices so users could run it locally, which would reduce the time needed to communicate over a network. “Now, we reduced our detection time from about three seconds to 20 milliseconds.”
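The latency win comes from paying the network cost once: the first request fetches the model bundle, and every later frame is classified on-device. The sketch below simulates that download-once, run-locally strategy; the class name, the fixed label, and the millisecond figures (taken from the three-seconds-to-20-milliseconds improvement described above) are illustrative, not real AutoML Vision Edge APIs or benchmarks.

```python
# Toy illustration of the download-once, run-locally strategy that cut
# detection latency: the first call pays the simulated network cost of
# fetching the model; every later call runs on-device. Names and timings
# are invented for illustration.
class EdgeModelRunner:
    NETWORK_COST_MS = 3000   # simulated cloud round-trip for the download
    LOCAL_COST_MS = 20       # simulated on-device inference time

    def __init__(self) -> None:
        self._model = None

    def _download_model(self) -> None:
        # Stand-in for fetching the on-device vision model bundle.
        self._model = "terrain-classifier-v1"

    def classify(self, frame: bytes) -> tuple:
        """Return (label, cost_ms); downloads the model on first use."""
        if self._model is None:
            self._download_model()
            cost = self.NETWORK_COST_MS + self.LOCAL_COST_MS
        else:
            cost = self.LOCAL_COST_MS
        return ("sidewalk", cost)  # fixed label keeps the sketch simple


runner = EdgeModelRunner()
first_label, first_cost = runner.classify(b"frame-1")    # pays download cost
second_label, second_cost = runner.classify(b"frame-2")  # runs locally
```

After the one-time download, per-frame cost drops to local inference only, which is why the team saw response times fall by two orders of magnitude.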
EyeSpace currently supports English and Portuguese. For the next phase of development, the team is using ML Kit’s on-device translation API, which can dynamically translate text between 59 languages, to make EyeSpace accessible worldwide. “We want EyeSpace to be fully internationalized,” Nayak says, looking ahead to a public beta release. They’d also like to expand the app’s functionality to help people with daily tasks, such as choosing clothing based on colors.
To other students, developers, and entrepreneurs, Nayak says, “If the first platform you're using is Google Cloud, then that's probably the best choice you've made. There are many, many resources to choose from. Most of it is pretty easy to use and you can build a lot of cool stuff...quickly.”
“If the first platform you're using is Google Cloud, then that's probably the best choice you've made. There are many, many resources to choose from.”
Rishab Nayak, Student, Boston University