Motor vehicle accidents can happen to anyone, at any time. In the fall of 2018, Karthik Kalyanaraman, who is studying computer science (CS) at the University of California, Berkeley, witnessed a car accident right in front of him and the dispute between drivers in its aftermath. His classmate and friend, Akash Singhal, had himself been involved in a hit-and-run accident a few years earlier. “Because of lack of evidence, we weren’t able to locate the driver,” Singhal recalls.
The students realized how frequently car accidents result in complicated legal, insurance, and law-enforcement issues—especially when it’s unclear who’s responsible. So they teamed up with Harshayu Girase and William Wong, fellow CS majors at UC Berkeley, to develop technology that could shed light on what happens in traffic accidents. To work on their initiative, they entered Cal Hacks 5.0, billed as the world's largest collegiate hackathon, in November 2018. “We decided to join as a team and build a cool product,” Singhal remembers.
Most vehicle accidents, Singhal explains, lack video documentation to show who is at fault. “We thought, ‘Well, cars now have all this technology—lots of cameras and lots of sensors,’” he says. “Why wouldn't we leverage that technology to create a connected community where we share crash footage with the appropriate authorities?” The four entered Cal Hacks with these questions in mind and “for 36 hours straight we basically built the entire platform”—hatching their solution, DashOwl.
“The expandability of the Google Cloud ecosystem was the biggest appeal.”
Akash Singhal, student, University of California, Berkeley
Providing AI-powered crash analytics
Their hackathon project has since developed into an initiative its student founders hope to one day grow into a startup company. The team’s website defines DashOwl’s hardware-software combination as “artificial-intelligence-powered vehicle crash analytics.” Their platform receives input from a camera—either an existing onboard model or a DashOwl dashcam (a web camera that has been retrofitted into a unit that’s controlled by an Internet-connected microcomputer)—and parses and uploads that video stream to Cloud Storage, Google Cloud’s unified object storage service.
“We also collect the metadata involved, including location, time, and date, catalog it, and display it on a website backed by a database we have set up, all using Firebase,” Singhal explains. DashOwl then uses machine learning (ML) to analyze that data and detect whether a crash occurred, and if so, whether the camera(s) captured it.
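The cataloging step Singhal describes can be sketched as a small record builder. The field names below are illustrative, not DashOwl's actual schema; in their setup a record like this would be written to Firebase Realtime Database alongside the clip stored in Cloud Storage.

```python
from datetime import datetime, timezone

def build_clip_record(clip_path, lat, lon, recorded_at=None):
    """Build the metadata record cataloged alongside each uploaded clip.

    clip_path: object path of the video in cloud storage
    lat, lon:  geotag reported by the dashcam unit
    """
    ts = recorded_at or datetime.now(timezone.utc)
    return {
        "storagePath": clip_path,   # where the video lives in Cloud Storage
        "latitude": lat,
        "longitude": lon,
        "date": ts.strftime("%Y-%m-%d"),
        "time": ts.strftime("%H:%M:%S"),
    }

record = build_clip_record("clips/cam01/0001.mp4", 37.8715, -122.2730)
# With the firebase_admin SDK, a record like this could then be pushed
# to Realtime Database, e.g. db.reference("clips").push(record).
```

Keeping the video bytes in object storage and only this lightweight record in the database is a common split: the database stays fast to query by location and time, while the heavy files are fetched on demand.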
“There's a whole emotional aspect, but really the important part is law enforcement,” Singhal says of DashOwl. “You can detect and understand who's at fault and rightfully charge and deliver justice. And there is also a monetary aspect, because if we can decide who's at fault, a lot of litigation could be removed from the equation.”
DashOwl’s key components include the onboard dashcam, geotagging, cloud processing, accident archiving, expanding database, and secured storage. Such a complex platform might seem to be a daunting undertaking for college students, but the team’s familiarity with Google Cloud products extended back to their high school and even middle school years.
“Our decision to use Google Cloud was really intuitive, really natural.”
Akash Singhal, student, University of California, Berkeley
Driving towards scale
“Our decision to use Google Cloud was really intuitive, really natural,” Singhal says. “But the expandability of the Google Cloud ecosystem was the biggest appeal.”
The team uses Cloud Storage for Firebase to store the video files as they are recorded by the onboard camera, and Firebase Realtime Database to store each video’s metadata: latitude, longitude, date, and time. They also used TensorFlow and over 1,200 training videos to develop and train their ML model, which determines whether the onboard camera footage actually shows a crash, running that code on Compute Engine. TensorFlow handled two main tasks: car detection and accident classification. The team chose the open source library for its built-in object detection, segmentation, and classification algorithms, which they needed to detect vehicles in images. They also made use of pretrained models, specifically one that detects cars, to sidestep the difficult challenge of training their own detector from scratch on millions of data points.
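The two-stage structure, detect cars first, then classify only the frames that contain them, can be sketched as below. The detector and classifier here are stand-in callables for illustration; DashOwl's real versions are TensorFlow models (a pretrained car detector plus their own crash classifier), and the threshold value is an assumption, not a figure from the team.

```python
# Illustrative two-stage crash-detection pipeline. Stage 1 filters out
# frames with no cars; stage 2 scores the remaining frames for a crash.

def analyze_frames(frames, detect_cars, classify_crash, threshold=0.5):
    """Return indices of frames flagged as showing a crash."""
    flagged = []
    for i, frame in enumerate(frames):
        if not detect_cars(frame):              # stage 1: any cars present?
            continue
        if classify_crash(frame) >= threshold:  # stage 2: crash probability
            flagged.append(i)
    return flagged

# Toy stand-ins for demonstration (real inputs would be image tensors):
frames = [{"cars": 0, "p": 0.9}, {"cars": 2, "p": 0.2}, {"cars": 3, "p": 0.8}]
hits = analyze_frames(
    frames,
    detect_cars=lambda f: f["cars"] > 0,
    classify_crash=lambda f: f["p"],
)
# hits -> [2]: only the last frame both contains cars and scores above threshold
```

Gating the classifier behind a cheap detector also helps with the false-positive problem Singhal mentions: frames with no vehicles at all never reach the crash classifier, no matter how noisy they are.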
“The biggest challenge was detecting crashes,” Singhal says. “It’s a very difficult machine learning issue, because you have to consider different aspects, such as time of day, the cars involved, and what actually counts as a crash. False positives are definitely an issue.”
The four classmates are refining DashOwl as they continue their studies and hope to eventually deploy it for large-scale testing. “We definitely want to spend more time improving our machine learning models,” Singhal says. “And we want to take it further, and potentially look for integrations with vehicle manufacturers.”
DashOwl also has the potential to extend beyond the vehicles directly involved in a crash. “With the advent of the connected car, almost every modern vehicle has a large suite of cameras that could serve as witnesses to accidents on the road,” the team’s Cal Hacks 5.0 submission noted. “But there exists no system for vehicles in the vicinity of the accident to identify and share crash footage, so valuable video evidence never reaches the victims.”
With over a billion drivers now on the road worldwide, DashOwl has the potential to ease legal and insurance disputes for many drivers.