In Turning Video into Traffic Data Part One, I wrote about Miovision’s systematic method for processing the large amount of video that is uploaded to our system. I detailed our three-step process for video configuration, quality assurance, and data validation, and explained how computer vision is used to detect vehicle movements from video. If you haven’t yet read Part One, I recommend you start there.
In this second and final post, I will dive into the details of data accuracy: how we account for error, how we develop our best-in-class algorithm, and how that helps our customers rely on the quality of Miovision data for any project of any size.
Deconstructing a Frame of Video into Spatial Regions for Counting
When video is uploaded to Miovision, the cardinal direction and number of lanes are required inputs. That is because each video is split into segments that are processed individually.
Each video segment is defined by its spatial region, lane, and approach. Segments are then distributed across a number of processes on a cloud computing service and queued for a computer vision task.
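To make the segmentation step concrete, here is a minimal sketch of how a configured video might be broken into per-approach, per-lane segments and queued for computer vision processing. The `VideoSegment` structure, the `split_into_segments` helper, and the simple in-memory queue are illustrative assumptions on my part, not Miovision’s actual implementation.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class VideoSegment:
    """One independently processable unit of an uploaded video (hypothetical schema)."""
    video_id: str
    approach: str   # cardinal direction supplied at upload, e.g. "NB"
    lane: int       # lane index within the approach
    region: tuple   # spatial region (x, y, width, height) in pixels

def split_into_segments(video_id, approaches, lane_regions):
    """Build one segment per (approach, lane) pair from the upload configuration.

    `lane_regions` maps an approach to the pixel regions of its lanes --
    in practice these would come from the video configuration step
    described in Part One.
    """
    segments = []
    for approach in approaches:
        for lane, region in enumerate(lane_regions[approach], start=1):
            segments.append(VideoSegment(video_id, approach, lane, region))
    return segments

# Queue each segment so it can be picked up as an independent computer vision task.
cv_task_queue: Queue = Queue()
segments = split_into_segments(
    video_id="intersection-042",
    approaches=["NB", "SB"],
    lane_regions={
        "NB": [(120, 300, 80, 400), (200, 300, 80, 400)],
        "SB": [(600, 300, 80, 400)],
    },
)
for segment in segments:
    cv_task_queue.put(segment)
```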
When the computer vision tasks are complete, each video segment is queued for human review and verification. Reviewers manually count a 12% cross-section of each hour of video to confirm that the computer vision algorithm is producing counts correctly and that the data is accurate.
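One way to picture that verification step is the sampling logic below: for each hour of video, select a 12% cross-section (about 7.2 minutes) and send it for manual counting. Treating the cross-section as a single randomly placed window per hour, along with the function and parameter names, is an assumption for illustration rather than a description of Miovision’s actual sampling scheme.

```python
import random

SAMPLE_FRACTION = 0.12   # 12% of each hour is manually counted
HOUR_SECONDS = 3600

def sample_windows_for_review(video_duration_seconds, seed=None):
    """Return one (start, end) review window per hour of video.

    Each window covers 12% of its hour (about 7.2 minutes), placed at a
    random offset so reviews are not biased toward the start of the hour.
    """
    rng = random.Random(seed)
    window_length = int(HOUR_SECONDS * SAMPLE_FRACTION)  # 432 seconds
    windows = []
    for hour_start in range(0, video_duration_seconds, HOUR_SECONDS):
        hour_end = min(hour_start + HOUR_SECONDS, video_duration_seconds)
        latest_start = max(hour_start, hour_end - window_length)
        start = rng.randint(hour_start, latest_start)
        end = min(start + window_length, hour_end)
        windows.append((start, end))
    return windows

# Example: a 2-hour video yields two ~7.2-minute windows for manual counting.
for start, end in sample_windows_for_review(2 * HOUR_SECONDS, seed=7):
    print(f"review {start}s - {end}s")
```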