The group calls the system "real-time motion planning for aerial videography." It lets a user define basic parameters of a shot, such as the size of the frame or the position of the subject within it. These parameters can also be modified on the fly, and the drone adjusts how it is capturing accordingly. And, of course, the drone can actively avoid obstacles.
While some consumer drones like the DJI Mavic Pro already offer object recognition and tracking, the MIT project distinguishes itself with more robust versions of these technologies and a much greater degree of granular control. The system continuously estimates the velocities of objects moving around the drone, updating those estimates 50 times per second.
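To give a flavor of what updating velocity estimates at 50 Hz involves, here is a minimal sketch: a finite-difference velocity estimate from position samples taken every 0.02 seconds, smoothed with an exponential moving average to damp sensor noise. The function names, smoothing factor, and update rule are illustrative assumptions, not the MIT system's actual estimator.

```python
DT = 1.0 / 50.0     # 50 updates per second, i.e. one sample every 20 ms
ALPHA = 0.3         # smoothing factor: how much to trust the newest reading

def update_velocity(prev_pos, pos, prev_vel, dt=DT, alpha=ALPHA):
    """Blend the raw finite-difference velocity with the previous estimate."""
    raw = tuple((p - q) / dt for p, q in zip(pos, prev_pos))
    return tuple(alpha * r + (1 - alpha) * v for r, v in zip(raw, prev_vel))

# Object moving at a steady 1 m/s along x: the estimate converges toward (1, 0)
vel = (0.0, 0.0)
prev = (0.0, 0.0)
for step in range(1, 101):
    pos = (step * DT * 1.0, 0.0)  # two seconds of simulated 50 Hz samples
    vel = update_velocity(prev, pos, vel)
    prev = pos
```

A real system would fuse several sensors rather than difference raw positions, but the cadence is the same: a fresh estimate every 20 milliseconds.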
The researchers say that a director using their system would be able to weight certain variables, so the drone knows what to prioritize in a shot, too. From the MIT release:
Unless the actors are carefully choreographed, the distances between them, the orientations of their bodies, and their distance from obstacles all vary, making it impossible to respect every constraint simultaneously. But the user can specify how the different factors should be weighed against each other. Preserving the actors' relative positions on screen, for example, could be more important than maintaining a precise distance, or vice versa. The user can also assign a weight to minimizing occlusion, ensuring that one actor doesn't end up blocking another from the camera.
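The trade-off described above can be sketched as a weighted cost function: each framing constraint contributes a squared deviation from its target, scaled by a user-chosen weight, and the planner prefers camera positions that minimize the total. The constraint names and weight values below are hypothetical, chosen only to illustrate the idea.

```python
def framing_cost(measured, desired, weights):
    """Weighted sum of squared deviations from the desired shot parameters."""
    return sum(
        weights[k] * (measured[k] - desired[k]) ** 2
        for k in desired
    )

# Illustrative shot parameters (normalized screen units / meters)
desired  = {"actor_spacing": 0.25, "subject_distance": 3.0, "occlusion": 0.0}
measured = {"actor_spacing": 0.30, "subject_distance": 3.5, "occlusion": 0.1}

# Prioritize the actors' relative on-screen positions over exact distance,
# with occlusion penalized somewhere in between
weights = {"actor_spacing": 10.0, "subject_distance": 1.0, "occlusion": 5.0}
cost = framing_cost(measured, desired, weights)
```

Raising one weight relative to the others shifts which constraint the drone sacrifices first when, as the release notes, not all of them can be satisfied at once.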
It's a clever idea, both reminiscent of and a seemingly natural extension of the virtual-camera work that directors like James Cameron helped pioneer and that others (like Gareth Edwards and Lucasfilm) have used since. It's certainly not ready for that kind of work yet, judging by CSAIL's video. But it's another notable example of how new hardware and software are changing filmmaking, big and small.