Students use Google Cloud to create a breakthrough app for predicting epileptic seizures

Computer science (CS) students in the San Francisco Bay Area used a local hackathon to create an app that can predict epileptic seizures for people who experience them regularly. After building their initial web app on Google Cloud, the team created v2 as a mobile app using Android Studio. And they aren’t done yet.

Four CS students who had never met, a two-day hackathon, and a minimally supported hardware device for detecting electrical signals in the brain—perhaps not the ideal conditions for developing an app that predicts epileptic seizures. But for Qianyun “Aria” Chang, a then graduating senior at the University of California, Davis, and Tejas Shah, a sophomore at Diablo Valley College in the San Francisco Bay Area, those conditions proved more than enough. Along with teammates Rutuja Oza and Dhawal Majithia, a sophomore and a senior at UC Davis, respectively, they not only built a functioning web app using Google Cloud, they also won a top prize at HackDavis 2019.

“Tejas had experience in back-end and mobile development,” Chang explains, “which complemented my experience in machine learning development and predictive analytics. I thought we would be able to create something awesome together.”

A window into brain activity

At an earlier hackathon, Chang was fascinated by a team using a headband to monitor brain waves while the wearer was sleeping, eating, and drinking coffee, with changes in the brain displayed on a monitor. When she described this to Tejas Shah, he remembered that he had such a device at home. “My dad had bought the Muse headband a few years ago, thinking it could help my grandma with feedback and meditation,” says Shah.

The Muse device measures brain activity via EEG (electroencephalography) sensors and generates audio feedback to help guide the user toward a relaxed state of mind for meditation. However, the team quickly learned that their version of the Muse headband was a bit dated and only minimally supported by the manufacturer. “We really did not have a clear idea if this would work,” says Shah. “So we decided to build something anyway, finish the project, and continue working on it later.” “The hardware itself wasn’t our primary concern,” adds Chang. “There would be other alternatives we could look into after the hackathon. We saw this as a long-term project.”

Using Bluetooth, the team connected the Muse headband to a phone, which uses the Open Sound Control (OSC) protocol to upload EEG data to a local computer. Next, they built the back-end app to process that data on Google App Engine. That required selecting and training a model that would serve as the basis for seizure prediction. They used Google TensorFlow and the neural network library Keras to build a deep learning model from scratch, and they hosted it on Google Cloud ML Engine.
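The ingestion side of a pipeline like this can be sketched in a few lines. This is a minimal illustration only, assuming the third-party python-osc library and the `/muse/eeg` OSC address the Muse typically streams on; the buffer size, port, and handler names are made up for the example and are not the team’s actual code.

```python
from collections import deque

# Rolling buffer of the most recent EEG samples (one row per OSC message).
# The maxlen of 256 is an illustrative window size, not the team's choice.
BUFFER = deque(maxlen=256)

def on_eeg(address, *channels):
    """Handler for incoming OSC messages carrying EEG channel values."""
    BUFFER.append(list(channels))

# Wiring the handler to an OSC server (python-osc), run as a script:
# from pythonosc.dispatcher import Dispatcher
# from pythonosc.osc_server import BlockingOSCUDPServer
# d = Dispatcher()
# d.map("/muse/eeg", on_eeg)
# BlockingOSCUDPServer(("0.0.0.0", 5000), d).serve_forever()

# Simulate one incoming message (four EEG channel readings):
on_eeg("/muse/eeg", 823.1, 810.4, 799.7, 805.2)
print(len(BUFFER))  # 1
```

From a buffer like this, fixed-length windows of samples can be cut and forwarded to the back end for scoring.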

“We implemented a model based on a research paper*,” Chang explains. “The use of EEG signals to predict seizures is pretty new, and most of the research is done in the lab without being tested on actual patients. So we were trying to bridge the gap between the current research and something that people can actually use.”

How Brainalyzer works

Chang and her team bridged that gap with Brainalyzer, their initial web app. As a user wears the Muse headband, the EEG signals serve as inputs to the team’s model. Using Long Short-Term Memory (LSTM), a deep learning neural network capable of learning long-term dependencies, Brainalyzer treats the EEG waves as a time-series task and tries to predict the future occurrence of a seizure. After ML Engine processes the data, the web app displays a chart showing the probability of an epileptic seizure occurring at different times in the near future.
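The “long-term dependencies” an LSTM captures come from a cell state that is carried across time steps. The following NumPy sketch shows the recurrence a single LSTM cell computes as it scans an EEG window one sample at a time; the channel count, hidden size, and random weights are illustrative, not the team’s trained model (which was built in TensorFlow/Keras).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. x: input at time t, shape (d,);
    h_prev, c_prev: previous hidden/cell state, shape (n,).
    W (4n, d), U (4n, n), b (4n,) stack the four gate weight blocks:
    input, forget, candidate, output."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[:n])          # input gate: admit new information
    f = sigmoid(z[n:2*n])       # forget gate: keep or drop old state
    g = np.tanh(z[2*n:3*n])     # candidate cell values
    o = sigmoid(z[3*n:])        # output gate
    c = f * c_prev + i * g      # cell state carries long-term memory
    h = o * np.tanh(c)          # hidden state / output at time t
    return h, c

# Scan a toy EEG window through the cell, one sample at a time.
rng = np.random.default_rng(0)
d, n, T = 4, 8, 16              # 4 EEG channels, 8 hidden units, 16 samples
W = rng.normal(size=(4*n, d))
U = rng.normal(size=(4*n, n))
b = np.zeros(4*n)
h = c = np.zeros(n)
for x in rng.normal(size=(T, d)):
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (8,)
```

In the real app, the final hidden state (or sequence of states) would feed a sigmoid output layer that yields the seizure probability.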

Image: Diagram 1: The Muse headband transmits the wearer’s EEG signals via Bluetooth to a phone, which relays them to ML Engine for processing. After analysis, the Brainalyzer app shows the user a unique seizure prediction chart.

“With optimal hyperparameter selections, our model can predict new patients’ wave patterns and whether or not a seizure is likely to occur at a future timestamp with 91% accuracy,” says Chang, noting that this accuracy figure comes from training and evaluating the model, not from predictions for the human testers during the hackathon. “Though the model predicted zero future seizure episodes for the subjects tested at the hackathon with a high confidence level, as expected, we have yet to test the model on subjects with epileptic tendencies to further evaluate its performance.”

While projects at HackDavis can win with a strong presentation, or even just a PowerPoint that illustrates a strong idea, the Brainalyzer team built a minimum viable product by the end of the two-day event. “We got the headband to work,” says Chang. “It was able to give us the input we needed, and we got the machine learning part to work as well. So it all came together in the end for us.”
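An accuracy figure like the one above comes from framing prediction as time-series classification: the continuous recording is cut into fixed windows, each labeled by whether a seizure begins within some horizon after it. The sketch below shows one common way to build such a dataset; the window and horizon sizes, and the `make_windows` helper itself, are hypothetical choices for illustration, not the team’s actual preprocessing.

```python
import numpy as np

def make_windows(eeg, seizure_onsets, win=256, horizon=512):
    """Slice a continuous recording (samples x channels) into fixed
    windows. A window is labeled 1 if any seizure onset (sample index)
    falls within `horizon` samples after the window ends."""
    X, y = [], []
    for start in range(0, len(eeg) - win, win):
        end = start + win
        X.append(eeg[start:end])
        y.append(int(any(end <= t < end + horizon for t in seizure_onsets)))
    return np.array(X), np.array(y)

eeg = np.zeros((2048, 4))                      # 2048 samples, 4 channels
X, y = make_windows(eeg, seizure_onsets=[1100])
print(X.shape, y.sum())                        # (7, 256, 4) 2
```

Only the windows ending shortly before the onset at sample 1100 get a positive label, which is exactly the “will a seizure occur soon?” question the model is trained to answer.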

Going mobile with Android

The Brainalyzer app won for “Best Use of Google Cloud” at HackDavis. But that was just the start. The team was committed to creating v2 as a mobile app that would be more user-friendly than the original web app and could potentially be made publicly available. To do that, Chang and Shah rolled up their sleeves and dug into Android Studio.

“First we designed the UI so we could drag and drop our front end, then add the graphs and other text we wanted to include,” says Shah. “We then optimized for the Muse headband, which connects simply via Bluetooth. We had to do a lot of testing, making sure it was connected properly and receiving data. We found that testing goes really fast: I could make a change and compile it, and for just a small change in one line of code, Android Studio gives you Instant Run, which lets you apply the change on the fly.”

The Muse headband transfers data onto a phone, which then sends an HTTP call to a Google Cloud Function that serves as a gateway to the model. Shah explains: “It asks the model, ‘Are there any seizures within this data set?’ As far as deploying the code itself for Cloud Functions, it was a pretty painless experience.” The results appear in a mobile-friendly graph that users can easily review.
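The core of a gateway function like the one described can be sketched as a plain Python handler. This is a hedged illustration: the JSON shape, function names, and stubbed scoring are assumptions, and in the real app the function would run as an HTTP-triggered Cloud Function and call the deployed model rather than a local stub.

```python
import json

def score_window(window):
    """Stand-in for the deployed model. The real gateway would forward
    the window to the hosted prediction service here."""
    return 0.5  # placeholder probability

def predict(request_body):
    """Core logic of a hypothetical HTTP gateway function.
    Expects JSON like {"eeg": [[c1, c2, c3, c4], ...]} and returns
    a (body, status) pair mirroring an HTTP response."""
    payload = json.loads(request_body)
    window = payload.get("eeg") or []
    if not window:
        return json.dumps({"error": "no EEG data"}), 400
    prob = score_window(window)
    return json.dumps({"seizure_probability": prob}), 200

body, status = predict('{"eeg": [[1.0, 2.0, 3.0, 4.0]]}')
print(status, body)  # 200 {"seizure_probability": 0.5}
```

Keeping the gateway this thin is what makes the phone-side integration simple: the app only ever sends one POST and renders one probability.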

Ease of use for developers on Google Cloud

Shah was happy that Google Cloud anticipates much of the pain developers often experience when building interdependent apps: “The two biggest points are ease of use—the UI and the ability to quickly locate the different services you need, for example—and the tremendous amount of support for machine learning and related services. I knew it was going to be hard to export and deploy models. It was definitely nice to have these problems handled automatically by Google Cloud.”

A potential healthcare breakthrough

When it came time for judging at the hackathon, the Brainalyzer team had a moment to step back and think deeply about the implications and applications of their product. “The judges wanted to know how we plan on publishing it to the market,” says Chang. “How can users benefit? Do we plan to keep working on it? This product could benefit people who frequently experience life-threatening seizures, and without an app like this the seizures can come with no warning. And that goes far beyond the thrill of winning a hackathon.”

Although Chang and Shah completed v2 of Brainalyzer as an Android app, they still feel they can do much more with the product. Next steps will include exploring more powerful brain-scanning devices that can provide accurate inputs and predictive capabilities.

Shah reveals, “When you’re at a hackathon, you usually think about building the product and whether you might win. But when you're trying to build something that helps people, especially those you know personally, it gives you a warm feeling on the inside, as if you just struck gold.”

*Tsiouris, K. M., et al. (2018). “A Long Short-Term Memory deep learning network for the prediction of epileptic seizures using EEG signals.” Computers in Biology and Medicine. Available at: https://www.sciencedirect.com/science/article/pii/S001048251830132X

“Google’s Vision API is a good example of the efficiency Google Cloud has built into key processes. The API provides data storage for you to upload all your training data easily, and the training process takes very little time.”

Qianyun “Aria” Chang, 2019 graduate, University of California, Davis


Get started with Google Cloud’s higher education learning center at: g.co/learncloud/programs