A tool to filter large amounts of video clip data along semantic attributes. Redfish allows autonomous driving engineers to quickly find relevant data for driving scenarios.
When software engineers develop an autonomous driving algorithm, they have to test it against a vast variety of driving scenarios. Collecting large amounts of driving data through sensor recordings and camera clips is comparatively simple. But once you have the data, how do you filter all recordings by, say, three or more pedestrians, one left turn, high speed, at night?
The team at MX Automotive – a Merantix company – uses clever machine learning techniques to analyze driving data. They derive semantic attributes for every frame. The result is a rich data set for autonomous driving engineers. We set out to build a filtering tool on top of that dataset. "Redfish" lets engineers formulate queries that find matching video clips. Using Redfish, engineers can also play the clips, check each frame against its semantic attributes, and finally export the relevant clip data.
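To give a feel for this kind of query, here is a minimal sketch in TypeScript. The attribute names (`pedestrianCount`, `speedKmh`, `night`, and so on) are my own illustrative assumptions, not the actual MX Automotive schema; the idea is simply that a query is a set of predicates over per-frame semantic attributes:

```typescript
// Hypothetical shape of the per-frame semantic attributes
// (field names are assumptions for illustration only).
interface FrameAttributes {
  timestamp: number;        // seconds since clip start
  pedestrianCount: number;
  turning: "left" | "right" | "none";
  speedKmh: number;
  night: boolean;
}

// A query is a list of predicates; a frame matches when all of them hold.
type Predicate = (f: FrameAttributes) => boolean;

function matchFrames(
  frames: FrameAttributes[],
  predicates: Predicate[],
): FrameAttributes[] {
  return frames.filter((f) => predicates.every((p) => p(f)));
}

// Example query: three or more pedestrians, high speed, at night.
const query: Predicate[] = [
  (f) => f.pedestrianCount >= 3,
  (f) => f.speedKmh > 80,
  (f) => f.night,
];
```

A real system would of course push such filters into a database rather than scan frames in the browser, but the predicate-composition idea is the same.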
I built the web frontend for Redfish in 10 days, using TypeScript, React, hooks, and more. The tricky part was matching each video frame with the semantic attributes closest in time, in order to keep the map, the data table, and the video player in sync. I liked the project a lot, because working with the whole team at MX Automotive was a breeze.
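The matching itself boils down to a nearest-neighbor lookup over time: the player reports an arbitrary playback time, while the attributes are sampled at discrete timestamps. A minimal sketch of that lookup (assuming, as in our data, that the timestamps are sorted ascending):

```typescript
// Find the index of the attribute sample whose timestamp is closest to the
// playback time `t`. `timestamps` must be sorted ascending and non-empty.
// Binary search keeps this fast enough to run on every player tick.
function closestSampleIndex(timestamps: number[], t: number): number {
  let lo = 0;
  let hi = timestamps.length - 1;
  // Invariant: the first index with timestamps[i] >= t lies in [lo, hi].
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (timestamps[mid] < t) lo = mid + 1;
    else hi = mid;
  }
  // `lo` is the first sample at or after `t`; the closest sample is either
  // that one or its predecessor.
  if (lo > 0 && t - timestamps[lo - 1] < timestamps[lo] - t) return lo - 1;
  return lo;
}
```

On every `timeupdate` event from the video player, the frontend can resolve the current time to the nearest sample and drive the map and data table from the same index, which is what keeps the three views consistent.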