Jurnal Teknik Informatika C.I.T Medicom
https://www.medikom.iocspublisher.org/index.php/JTI
<p style="text-align: justify;"><img src="https://medikom.iocspublisher.org/public/site/images/gerhard/editor-review.png" alt="" />The Jurnal Teknik Informatika C.I.T Medicom a scientific journal of Decision support sistem, expert system and artificial inteligens which includes scholarly writings on pure research and applied research in the field of information systems and information technology as well as a review-general review of the development of the theory, methods, and related applied sciences.</p> <table style="border-collapse: collapse; width: 100%;" border="0"> <tbody> <tr> <td style="width: 50%;"> <ol> <li>Expert systems</li> <li>Decision Support System</li> <li>Datamining</li> <li>Artificial Intelligence</li> <li><a href="https://medikom.iocspublisher.org/index.php/JTI/scope">See Scope for more details...</a></li> </ol> </td> <td style="width: 50%;"> <p><span style="color: #ff0000;"><strong>CALL FOR PAPER</strong></span></p> <p><span style="color: #339966;"><strong>Volume 17, No 4, (2025)</strong></span><br /><strong>Submit Deadline</strong>: Sep 30, 2025<br /><strong>Published</strong>: Sep 30, 2025<br /><span style="color: #ff0000;"><strong>APC: FREE</strong></span><br /><a href="https://medikom.iocspublisher.org/index.php/JTI/user/register" target="_blank" rel="noopener"><strong>Klik For Submit</strong></a></p> </td> </tr> </tbody> </table> <p align="justify"><strong>Frekuensi : </strong><em>(January, March, May, July, September, and November).</em></p> <p align="justify"><strong>Acceptance Ratio:</strong></p> <table width="100%"> <tbody> <tr> <td bgcolor="#F0F8FF"><strong>Volume 17 Issue 1 (2024)</strong></td> <td bgcolor="#F0F8FF"><strong>47%</strong></td> </tr> <tr> <td bgcolor="#F0F8FF"><strong>Volume 16 Issue 6 (2023)</strong></td> <td bgcolor="#F0F8FF"><strong>20.94%</strong></td> </tr> <tr> <td bgcolor="#F5F5DC"><strong>Volume 16 Issue 5 (2022)</strong></td> <td bgcolor="#F5F5DC"><strong>18%</strong></td> </tr> <tr> <td bgcolor="#F0F8FF"><strong>Over All (Vol 1-16)</strong></td> <td bgcolor="#F0F8FF"><strong>18% </strong></td> </tr> </tbody> </table> <table style="border-collapse: collapse; width: 100%;" border="1"> <tbody> <tr> <td style="width: 43.6097%;">Citation Analysis :</td> <td style="width: 56.3903%;"><a href="https://medikom.iocspublisher.org/index.php/JTI/SCOPUS"><img src="https://jurnal.polgan.ac.id/public/site/images/polgan/scopus1.jpg" /></a> <a href="https://scholar.google.co.id/citations?hl=id&authuser=5&user=vB5ZokUAAAAJ"><img src="https://jurnal.polgan.ac.id/public/site/images/polgan/google1.jpg" /></a> <a href="https://sinta.kemdikbud.go.id/journals/detail?id=6844"><img src="https://jurnal.polgan.ac.id/public/site/images/polgan/sinta1.jpg" /></a></td> </tr> </tbody> </table>Institute of Computer Science (IOCS)en-USJurnal Teknik Informatika C.I.T Medicom2337-8646Real Time Pill Counting on Low Power Device: A YOLOv5 Pipeline with Confidence Thresholding and NMS
https://www.medikom.iocspublisher.org/index.php/JTI/article/view/1286
Manual pill counting is still commonly performed in healthcare facilities and pharmacies, but this method is vulnerable to human error and requires significant processing time. This study develops an automatic pill counting pipeline using the YOLOv5 deep learning model, optimized for low-power devices such as the Raspberry Pi, Orange Pi, and Jetson Nano. Unlike earlier techniques that depend on conventional retrieval or machine-learning approaches, this pipeline integrates real-time object detection with customized confidence thresholding and Non-Maximum Suppression (NMS), enabling high accuracy and fast performance on edge hardware with limited resources. The development process includes collecting and annotating a dataset of pill images with variations in shape, color, and orientation, followed by training YOLOv5 with optimized parameters. A simple webcam is used as the input device, and system performance is evaluated under different lighting and background conditions. Experimental results show that the model achieves 98% precision, 88% recall, 95% mAP@0.5, and 67% mAP@0.5:0.95, with an average inference speed of around 15 milliseconds per image. Tests on ten pill-counting scenarios under optimal lighting demonstrate strong performance, with only minor discrepancies in dense cases involving 50 and 127 pills, yielding accuracies of 98% and 99.21%. These results indicate that the optimized YOLOv5 pipeline provides fast and accurate real-time pill counting on low-power devices. Future work will enhance robustness to lighting variations, validate the pipeline on external datasets, and incorporate color and shape feature analysis to improve performance in challenging scenarios.

Galih Prakoso Rizky A, Rifka Widyastuti
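As an illustration of the kind of pipeline the abstract describes, the following is a minimal sketch using the public Ultralytics YOLOv5 torch.hub interface. The weights file name, the webcam index, and the 0.5/0.45 confidence and NMS thresholds are assumptions for the example, not the authors' tuned values.

```python
# Minimal sketch of a YOLOv5 pill-counting loop (not the authors' code).
# Assumes custom weights ("pills.pt", hypothetical) trained on a pill dataset.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="pills.pt")
model.conf = 0.5   # confidence threshold: discard low-confidence boxes
model.iou = 0.45   # NMS IoU threshold: suppress overlapping detections

cap = cv2.VideoCapture(0)  # simple webcam as the input device
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame)        # one forward pass per frame
    count = len(results.xyxy[0])  # boxes surviving thresholding and NMS
    cv2.putText(frame, f"pills: {count}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("pill counter", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Raising model.conf trades recall for precision, while lowering model.iou suppresses duplicate boxes more aggressively in dense scenes; both knobs matter when many pills touch or overlap.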
Copyright (c) 2025 Galih Prakoso Rizky A, Rifka Widyastuti
https://creativecommons.org/licenses/by-nc/4.0
Published: 2025-11-30
Volume 17, Issue 5, pp. 230-246
DOI: 10.35335/cit.Vol17.2025.1286.pp225-241

OPTIMIZATION OF CONVOLUTIONAL NEURAL NETWORK ALGORITHMIC ACCURACY FOR THE IDENTIFICATION OF DIFFERENT FONT TYPES
https://www.medikom.iocspublisher.org/index.php/JTI/article/view/1274
Text conveys its message not only through the words used, but also through its visual aspects. One of the most influential visual elements is the font. Recognising and determining font types correctly is essential, whether in the academic sector, the printing industry, graphic design, or digital systems. In practice, however, manually recognising font types takes time, skill, and high precision. With the advancement of digital technology, the variety of fonts is increasing, making identification more complicated and requiring methods that can distinguish different fonts precisely and accurately. This study explores Convolutional Neural Network (CNN) algorithms as an optimisation for the font identification challenge, and demonstrates that deep learning can provide more efficient and precise solutions by comparing three CNN architectures: DenseNet121, ResNet50, and VGG16. The method is implemented by applying data augmentation techniques and tuning CNN parameters such as the number of epochs, learning rate, batch size, the Adam optimiser, and image size. The results show that the DenseNet121 model achieved an accuracy of up to 96.8%, ResNet50 92.9%, and VGG16 96.4%. The convolutional neural network approach thus identifies various font types with optimal accuracy.

Bulkis Kanata, Misbahuddin, M. Rafif Akhdan
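The following sketch illustrates the kind of transfer-learning setup the abstract describes (a pretrained DenseNet121 backbone with data augmentation and the Adam optimiser), using standard Keras APIs. The class count, input size, and hyperparameter values are placeholders, not the paper's settings.

```python
# Illustrative font-classification setup (a sketch, not the study's code).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FONTS = 10         # hypothetical number of font classes
IMG_SIZE = (224, 224)  # assumed input size for DenseNet121

# Data augmentation applied on the fly during training.
augment = models.Sequential([
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
    layers.RandomTranslation(0.1, 0.1),
])

base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the pretrained backbone

model = models.Sequential([
    layers.Input(shape=IMG_SIZE + (3,)),
    augment,
    layers.Rescaling(1.0 / 255),        # simplified input scaling
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_FONTS, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=20, batch_size=32,
#           validation_split=0.1)
```

Swapping tf.keras.applications.DenseNet121 for ResNet50 or VGG16 reproduces the three-architecture comparison the study reports, with everything else held fixed.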
Copyright (c) 2025 Bulkis Kanata, Misbahuddin, M. Rafif Akhdan
https://creativecommons.org/licenses/by-nc/4.0
Published: 2025-11-30
Volume 17, Issue 5, pp. 247-256

Determining Initial Centroid in K-Means using Global Average and Data Dimension Variance
https://www.medikom.iocspublisher.org/index.php/JTI/article/view/1083
The selection of the right initial centroids greatly affects the quality of clustering results in the K-Means algorithm. This study proposes a new approach to determining the initial centroids by utilizing the global average and the variance of the data dimensions. The global average represents the overall center position of the data, while the per-dimension variance provides information on the distribution of each feature. The method is tested on three-dimensional synthetic data (X, Y, Z) with 121 data points, and compared with random initialization. The results show that the global average and variance-based method produces more balanced clusters, lower Sum of Squared Error (SSE) values, the highest Silhouette Score (0.65), and faster convergence. Compared to two random-initialization scenarios, the method proves more stable in separating clusters according to the distribution of low, medium, and high values. This approach contributes to the development of a more consistent and effective K-Means initialization strategy, especially for low- to medium-dimensional numerical datasets.

Efori Bu'ulolo
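One plausible reading of the described initialization, sketched below with NumPy and scikit-learn: spread the k initial centroids around the global mean, offset along each dimension by that dimension's standard deviation, so that the centroids land in the low, medium, and high regions of the data. The exact offsets and the synthetic data are assumptions for illustration, not the paper's formula or dataset.

```python
# Sketch of a mean-and-variance-based K-Means initialization (an
# interpretation of the abstract, not the author's exact method).
import numpy as np
from sklearn.cluster import KMeans

def mean_variance_init(X: np.ndarray, k: int) -> np.ndarray:
    """Initial centroids from the global average and per-dimension spread."""
    mu = X.mean(axis=0)                  # global average: overall data center
    sigma = X.std(axis=0)                # per-dimension spread
    offsets = np.linspace(-1.0, 1.0, k)  # low / medium / high positions
    return np.array([mu + t * sigma for t in offsets])

rng = np.random.default_rng(0)
X = rng.normal(size=(121, 3))  # synthetic 3-D data, 121 points as in the study

init = mean_variance_init(X, k=3)
km = KMeans(n_clusters=3, init=init, n_init=1).fit(X)
print("SSE (inertia):", km.inertia_)
```

Because the starting centroids are deterministic functions of the data, repeated runs converge to the same clustering, which is consistent with the stability the abstract reports relative to random initialization.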
Copyright (c) 2025 Efori Bu'ulolo
https://creativecommons.org/licenses/by-nc/4.0
Published: 2025-11-30
Volume 17, Issue 5, pp. 257-268