How to model the probability of detecting an object, given it is seen multiple times

Are there any existing methods or models describing the probability of an object being detected by a computer vision algorithm, given that it is seen $n$ times at similar angles and orientations? For example, an autonomous car may have trouble recognizing a stop sign: the bounding box around the sign may keep appearing and disappearing, signaling that the object detection algorithm only detects the stop sign some fraction of the time it is in view. I would like to understand this phenomenon in a more general sense.

More precisely, I would like to model the following: given that an object is detected with probability $p$ when it is seen once (i.e., during a single moment in time), what is the probability of it being detected when it is seen a second time, a third time, and so on? Each 'moment in time' may correspond to a single video frame/image or some other unit of time, and each time the object is seen it may be in a similar but not identical orientation.
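
As a concrete baseline (an assumption on my part, not an established model for this setting), one could treat each sighting as an independent Bernoulli trial with per-frame detection probability $p$, giving

$$P(\text{at least one detection in } n \text{ sightings}) = 1 - (1 - p)^n.$$

Since consecutive frames at similar angles are likely correlated, independence probably overstates the gain from additional sightings, so I am especially interested in models that account for this dependence.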

Tags: self-driving, object-detection, probability, computer-vision, predictive-modeling

Category: Data Science


An approach that is not specific to the image domain is to use a probabilistic data structure such as a Count-Min Sketch. A Count-Min Sketch accumulates a stream of inputs and estimates the observed frequency of any given value by applying several independent hash functions to each input and incrementing per-bucket counters, so you can cheaply track how often an object has actually been detected over time.
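
Here is a minimal sketch of the idea (my own illustration, not a reference implementation; the class name and the `sign_42` identifier are made up). Each detection event updates the sketch, and the estimated count divided by the number of frames gives an empirical per-frame detection rate $\hat{p}$:

```python
import hashlib

class CountMinSketch:
    """Approximate frequency counter: `depth` hash rows of `width` counters.
    Estimates never undercount; overcount error shrinks as width grows."""

    def __init__(self, width=1024, depth=5):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _indices(self, item):
        # One bucket index per row, derived from a salted hash of the item.
        for row in range(self.depth):
            digest = hashlib.md5(f"{row}:{item}".encode()).hexdigest()
            yield row, int(digest, 16) % self.width

    def add(self, item, count=1):
        for row, idx in self._indices(item):
            self.table[row][idx] += count

    def estimate(self, item):
        # The minimum across rows is an upper bound on the true count.
        return min(self.table[row][idx] for row, idx in self._indices(item))

# Toy usage: log which frames a (hypothetical) detector fires on.
cms = CountMinSketch()
n_frames = 100
for frame in range(n_frames):
    if frame % 3 != 0:             # pretend the detector misses every 3rd frame
        cms.add("sign_42")         # "sign_42" is a made-up object/track ID
print(cms.estimate("sign_42") / n_frames)  # ~0.66, an empirical p-hat
```

The estimate is biased upward only by hash collisions, so with a reasonably wide table the ratio above is a usable empirical stand-in for the per-frame detection probability $p$ in the question.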
