TIME-VARYING-GEOMETRY OBJECT SURVEILLANCE USING A MULTI-CAMERA ACTIVE-VISION SYSTEM

International Journal on Smart Sensing and Intelligent Systems

Editor-in-Chief: Professor Subhas Chandra Mukhopadhyay

Publisher: Exeley Inc. (New York)

Subject: Computational Science & Engineering; Engineering, Electrical & Electronic

eISSN: 1178-5608

Matthew Mackay*, Robert G. Fenton, Beno Benhabib

Keywords: Surveillance, Sensing-System Reconfiguration, Active Vision, Form Recognition

Citation Information: International Journal on Smart Sensing and Intelligent Systems, Volume 1, Issue 3 (September 2008), Pages 679-704. DOI: https://doi.org/10.21307/ijssis-2017-314

License: CC BY-NC-ND 4.0

Published Online: 13-December-2017

ABSTRACT

This paper presents a novel, agent-based sensing-system reconfiguration methodology for the recognition of time-varying-geometry targets (objects or subjects). A multi-camera active-vision system improves form-recognition performance by selecting near-optimal camera viewpoints along a prediction time horizon, seeking to maximize target visibility in a cluttered, dynamic environment. Simulation experiments demonstrate a tangible potential performance gain.
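
For illustration only, the following minimal Python sketch shows one way such a per-time-step viewpoint selection could be organized: a greedy assignment of each camera to its best candidate pose at every step of a predicted target trajectory. The 2-D geometry, the point-obstacle occlusion test, and the distance-based scoring term are all assumptions made for this sketch, not the formulation used in the paper.

import math

def visibility_score(camera_pos, target_pos, obstacles):
    """Score a candidate camera pose against a predicted target position.
    Placeholder metric: unoccluded views score higher when closer."""
    dx = target_pos[0] - camera_pos[0]
    dy = target_pos[1] - camera_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return 0.0
    for ox, oy, radius in obstacles:
        # Closest point on the camera-to-target sight line to the obstacle.
        t = ((ox - camera_pos[0]) * dx + (oy - camera_pos[1]) * dy) / (dist * dist)
        t = max(0.0, min(1.0, t))
        px = camera_pos[0] + t * dx
        py = camera_pos[1] + t * dy
        if math.hypot(ox - px, oy - py) < radius:
            return 0.0  # line of sight blocked by this obstacle
    return 1.0 / (1.0 + dist)

def plan_viewpoints(cameras, candidate_poses, predicted_targets, obstacles):
    """Greedily pick each camera's best pose at every step of the horizon."""
    plan = []
    for target_pos in predicted_targets:  # one predicted position per step
        step = {
            cam: max(candidate_poses[cam],
                     key=lambda pose: visibility_score(pose, target_pos, obstacles))
            for cam in cameras
        }
        plan.append(step)
    return plan

if __name__ == "__main__":
    cameras = ["cam0", "cam1"]
    candidate_poses = {
        "cam0": [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)],
        "cam1": [(0.0, 4.0), (1.0, 4.0), (2.0, 4.0)],
    }
    predicted_targets = [(1.0, 2.0), (1.5, 2.0), (2.0, 2.0)]  # prediction horizon
    obstacles = [(1.0, 1.0, 0.3)]  # (x, y, radius)
    for t, step in enumerate(plan_viewpoints(cameras, candidate_poses,
                                             predicted_targets, obstacles)):
        print("t=%d: %s" % (t, step))

In the paper's agent-based formulation, such pose choices would be negotiated among per-camera agents rather than computed centrally; the greedy loop above only conveys the horizon-based structure of the decision.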
