A tool to filter large amounts of video clip data by semantic attributes. Redfish allows autonomous driving engineers to quickly find relevant data for driving scenarios.
To develop an autonomous driving algorithm, you have to test it in millions of driving scenarios. Large amounts of sensor recordings and video clips are easy to collect. But how do you find all recordings with three or more pedestrians, a left turn, high speed, recorded at night?
The team at Merantix uses clever machine learning techniques to analyze driving clips. They derive semantic attributes for every frame. The result is a rich data set for autonomous driving engineers. So we set out to build Redfish, a web tool that lets engineers formulate such queries and find the matching video clips. Using Redfish, engineers can play the clips, check each frame against its semantic attributes, and finally export the relevant clip data. The autonomous driving algorithm can then be tested against those scenarios.
I built the Redfish web interface in 10 days, using TypeScript, React, hooks, and more. The tricky part was matching each video frame with the semantic attributes closest in time to that frame, in order to synchronize updates between the map, the data table, and the video player. I liked the project a lot, because working with the whole team at Merantix was a breeze.
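The frame-to-attribute matching can be sketched roughly like this: attribute samples carry timestamps, and for the current playback time we look up the sample closest in time with a binary search. This is a minimal illustration, not the actual Redfish code; the names (AttributeSample, closestSample) are hypothetical.

```typescript
// Hypothetical shape of one per-frame attribute record.
interface AttributeSample {
  timestamp: number; // seconds since clip start
  attributes: Record<string, number | string | boolean>;
}

// Find the sample whose timestamp is closest to the current frame time.
// Assumes `samples` is sorted by timestamp (binary search, O(log n)).
function closestSample(
  samples: AttributeSample[],
  frameTime: number
): AttributeSample | undefined {
  if (samples.length === 0) return undefined;
  let lo = 0;
  let hi = samples.length - 1;
  // Narrow down to the first sample at or after frameTime.
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (samples[mid].timestamp < frameTime) lo = mid + 1;
    else hi = mid;
  }
  // Compare with the predecessor and return whichever is nearer in time.
  if (
    lo > 0 &&
    frameTime - samples[lo - 1].timestamp <= samples[lo].timestamp - frameTime
  ) {
    return samples[lo - 1];
  }
  return samples[lo];
}
```

In a React setup along these lines, the video player's time-update event would feed `frameTime` into this lookup, and the resulting sample would drive the map and data table so all three views stay in sync.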