Insights on Research-based Approaches in Human Activity Recognition System

Abdul Lateef Haroon P. S., U. Eranna. Published in Information Sciences.

Communications on Applied Electronics
Year of Publication: 2018
Publisher: Foundation of Computer Science (FCS), NY, USA
Authors: Abdul Lateef Haroon P. S., U. Eranna
DOI: 10.5120/cae2018652765

Abdul Lateef Haroon P. S. and U. Eranna. Insights on Research-based Approaches in Human Activity Recognition System. Communications on Applied Electronics 7(16):23-31, May 2018.

BibTeX

@article{10.5120/cae2018652765,
	author = {Abdul Lateef Haroon P. S. and U. Eranna},
	title = {Insights on Research-based Approaches in Human Activity Recognition System},
	journal = {Communications on Applied Electronics},
	issue_date = {May 2018},
	volume = {7},
	number = {16},
	month = {May},
	year = {2018},
	issn = {2394-4714},
	pages = {23-31},
	numpages = {9},
	url = {http://www.caeaccess.org/archives/volume7/number16/812-2018652765},
	doi = {10.5120/cae2018652765},
	publisher = {Foundation of Computer Science (FCS), NY, USA},
	address = {New York, USA}
}

Abstract

Human Activity Recognition (HAR) systems are increasingly being embedded in diverse forms of sensing technology. With the rapid advancement of novel features in sensory applications, human activity can serve either as a means of commanding a system remotely or as input for sophisticated analysis of human behavior. Over the last decade, a substantial volume of literature has focused on improving the identification process through different research-based methodologies. However, it is quite evident that no benchmarked model, nor any landmark research work in this field, has yet been standardized among the research community. Hence, this paper investigates the fundamentals, the different existing approaches, and the loopholes associated with those approaches, so that the potential and impending problems can be distinctly explored. The paper contributes by identifying some of the open research issues that need significant attention.
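The identification process discussed above typically segments a sensor stream into windows, extracts hand-crafted features, and classifies each window. As a purely illustrative sketch (not a method from this paper), the toy example below classifies synthetic single-axis accelerometer windows with classic time-domain features and a nearest-centroid rule; the activity names, signal generator, and feature choice are all assumptions made for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_windows(activity, n=40, length=128):
    """Generate synthetic 1-axis accelerometer windows for two toy activities."""
    t = np.arange(length)
    out = []
    for _ in range(n):
        if activity == "walking":      # periodic, high-variance signal
            sig = np.sin(2 * np.pi * t / 16) + 0.3 * rng.standard_normal(length)
        else:                          # "standing": near-constant, low-variance signal
            sig = 0.05 * rng.standard_normal(length)
        out.append(sig)
    return np.array(out)

def features(windows):
    """Classic time-domain features: per-window mean and standard deviation."""
    return np.stack([windows.mean(axis=1), windows.std(axis=1)], axis=1)

# Build a tiny labelled dataset (label 0 = walking, 1 = standing)
# and fit a nearest-centroid classifier in feature space.
X = np.vstack([features(make_windows("walking")), features(make_windows("standing"))])
y = np.array([0] * 40 + [1] * 40)
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(feats):
    # Assign each window to the class with the closest feature centroid.
    d = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

accuracy = (predict(X) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Real HAR pipelines surveyed in the paper replace the synthetic signals with wearable- or vision-based sensor data and the centroid rule with stronger classifiers, but the window-feature-classifier structure is the same.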


Keywords

Human Activity Recognition, Motion Sensing, Action, Identification Accuracy