Real Time Pill Counting on Low Power Device: A YOLOv5 Pipeline with Confidence Thresholding and NMS

Authors

  • Galih Prakoso Rizky A, Universitas Pembangunan Nasional Veteran Jakarta, Indonesia
  • Rifka Widyastuti, Universitas Pembangunan Nasional Veteran Jakarta, Indonesia

DOI:

https://doi.org/10.35335/cit.Vol17.2025.1286.pp225-241

Keywords:

Computer vision, Deep learning, Object detection, Pill counter, YOLOv5

Abstract

Manual pill counting is still common in healthcare facilities and pharmacies, but it is prone to human error and time-consuming. This study develops an automatic pill-counting pipeline based on the YOLOv5 deep learning model, optimized for low-power devices such as the Raspberry Pi, Orange Pi, and Jetson Nano. Unlike earlier techniques that depend on conventional retrieval or machine-learning approaches, the pipeline integrates real-time object detection with tuned confidence thresholding and Non-Maximum Suppression (NMS), enabling high accuracy and fast inference on resource-constrained edge hardware. Development involved collecting and annotating a dataset of pill images varying in shape, color, and orientation, followed by training YOLOv5 with optimized parameters. A simple webcam serves as the input device, and system performance is evaluated under different lighting and background conditions. Experimental results show that the model achieves 98% precision, 88% recall, 95% mAP@0.5, and 67% mAP@0.5:0.95, with an average inference time of about 15 milliseconds per image. Tests on ten pill-counting scenarios under optimal lighting demonstrate strong performance, with only minor discrepancies in the dense cases of 50 and 127 pills, which yield counting accuracies of 98% and 99.21%, respectively. These results indicate that the optimized YOLOv5 pipeline provides fast and accurate real-time pill counting on low-power devices. Future work will improve robustness to lighting variations, validate the system on external datasets, and incorporate color and shape feature analysis to handle more challenging scenarios.
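
For readers who want to experiment with the approach summarized above, the following minimal Python sketch shows how a YOLOv5 detector with confidence thresholding and NMS could be attached to a webcam for per-frame pill counting. It is an illustration only, not the authors' released code: the weight file name (pill_best.pt), the 0.25/0.45 threshold values, and the camera index are assumptions rather than values reported in the paper.

```python
# Illustrative sketch only: per-frame pill counting with YOLOv5,
# confidence thresholding, and NMS on a webcam stream.
# Assumes torch, opencv-python, and the Ultralytics YOLOv5 hub
# dependencies are installed. "pill_best.pt" is a hypothetical
# custom-trained weight file, not an artifact released with the paper.
import cv2
import torch

# Load custom-trained YOLOv5 weights through torch.hub (AutoShape wrapper).
model = torch.hub.load("ultralytics/yolov5", "custom", path="pill_best.pt")
model.conf = 0.25   # confidence threshold (assumed value)
model.iou = 0.45    # NMS IoU threshold (assumed value)

cap = cv2.VideoCapture(0)  # simple USB webcam, camera index assumed
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # YOLOv5's AutoShape interface expects RGB images.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        results = model(rgb)
        detections = results.xyxy[0]           # rows: (x1, y1, x2, y2, conf, class)
        pill_count = int(detections.shape[0])  # one surviving box per detected pill
        cv2.putText(frame, f"Pills: {pill_count}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        cv2.imshow("Pill counter", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```

In this sketch the per-frame count is simply the number of boxes that survive the confidence filter and NMS, which is consistent with the detection-based counting described in the abstract.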




Published

2025-11-30

How to Cite

A, G. P. R., & Widyastuti, R. (2025). Real Time Pill Counting on Low Power Device: A YOLOv5 Pipeline with Confidence Thresholding and NMS. Jurnal Teknik Informatika C.I.T Medicom, 17(5), 230–246. https://doi.org/10.35335/cit.Vol17.2025.1286.pp225-241

Issue

Vol. 17 No. 5 (2025)

Section

OPTIMIZATION AND ARTIFICIAL INTELLIGENCE