Elmer Pamisa Dadios
De La Salle University

Published: 4 documents

Lettuce life stage classification from texture attributes using machine learning estimators and feature selection processes
Sandy Cruz Lauguico; Ronnie Sabino Concepcion II; Jonnel Dorado Alejandrino; Rogelio Ruzcko Tobias; Elmer Pamisa Dadios
International Journal of Advances in Intelligent Informatics Vol 6, No 2 (2020): July 2020
Publisher: Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v6i2.466

Abstract

Classification of lettuce life or growth stages is an effective tool for measuring the performance of an aquaponics system. It reflects the balance of water nutrients, the adequacy of temperature, lighting, and other environmental factors, and the system's productivity in sustaining cultivars. This paper proposes a classification of the life stages of lettuce planted in an aquaponics system. The classification was performed using texture features of the leaves derived from machine vision algorithms. The attributes underwent three feature selection processes, namely Univariate Selection (US), Recursive Feature Elimination (RFE), and Feature Importance (FI), to determine the four most significant of the original eight attributes. The selected features were used to train four estimators: Decision Tree Classifier (DTC), Gaussian Naïve Bayes (GNB), Stochastic Gradient Descent (SGD), and Linear Discriminant Analysis (LDA). The models trained with DTC and SGD were then optimized, as they have tunable hyperparameters. A comparative analysis among the machine learning (ML) algorithms was conducted to identify the best-performing model for the given application. The best features were derived from US and FI, which yielded the same top four features, paired with the DTC estimator tuned to a maximum depth of 5, the 'Gini' criterion, and the 'best' splitter. The accuracy obtained from cross-validation was 87.92%. Considering consistency with hold-out validation, however, LDA outperformed the optimized DTC despite its lower accuracy of 86.67%, as it generalized better to the test data when classifying lettuce growth stages.
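A minimal sketch of the pipeline described above, assuming a scikit-learn implementation: univariate selection keeps four of eight texture attributes, and a tuned Decision Tree is compared with LDA under cross-validation. The feature and label arrays are random placeholders, not the paper's data, and the three-class label set is an assumption.

```python
# Hypothetical sketch, not the authors' code: univariate selection of 4 of 8
# texture features, then a tuned Decision Tree vs. LDA with cross-validation.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((240, 8))          # placeholder: 8 texture attributes per leaf image
y = rng.integers(0, 3, 240)       # placeholder: 3 assumed life-stage labels

# Univariate Selection: keep the 4 attributes with the highest ANOVA F-scores
X_top4 = SelectKBest(score_func=f_classif, k=4).fit_transform(X, y)

dtc = DecisionTreeClassifier(max_depth=5, criterion="gini", splitter="best")
lda = LinearDiscriminantAnalysis()

print("DTC CV accuracy:", cross_val_score(dtc, X_top4, y, cv=5).mean())
print("LDA CV accuracy:", cross_val_score(lda, X_top4, y, cv=5).mean())
```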
Trophic state assessment using hybrid classification tree-artificial neural network
Ronnie Sabino Concepcion II; Pocholo James Mission Loresco; Rhen Anjerome Rañola Bedruz; Elmer Pamisa Dadios; Sandy Cruz Lauguico; Edwin Sybingco
International Journal of Advances in Intelligent Informatics Vol 6, No 1 (2020): March 2020
Publisher: Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v6i1.408

Abstract

The trophic state is one of the significant environmental conditions that must be monitored and controlled in any aquatic environment. This phenomenon, caused by nutrient imbalance in the water and intensified by global warming, inhibits the natural progression of the ecosystem. With eutrophication, the mass of algae at the water surface increases, lowering the dissolved oxygen that fish need. Numerous limnological and physical features affect the trophic state and thus require extensive analysis to assess it. This paper proposes a hybrid classification tree-artificial neural network (CT-ANN) model to assess the trophic state based on selected significant features. The classification tree was used as a dimensionality reduction technique for feature selection, eliminating eight of the original features. The remaining high-impact predictors are chlorophyll-a, phosphorus, and Secchi depth. A two-layer ANN with 20 artificial neurons was constructed to assess the trophic state from the input features. The network was modeled based on the key parameters of learning time, cross-entropy, and regression coefficient. An ANN model using all 11 predictors achieved 81.3% accuracy, while the hybrid classification tree-ANN based on 3 predictors achieved 88.8% accuracy with a cross-entropy performance of 0.096495. Based on these results, the hybrid classification tree-ANN provides higher accuracy in assessing the trophic state of the aquaponic system.
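A minimal sketch of the hybrid idea under stated assumptions: a classification tree ranks the 11 limnological predictors, the 3 most important are kept, and a small neural network with one hidden layer of 20 neurons classifies the trophic state. It uses scikit-learn as a stand-in for the authors' tooling, and the data and the number of trophic-state classes are placeholders.

```python
# Hypothetical sketch of a hybrid classification tree-ANN, not the study's model.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((300, 11))         # placeholder: 11 candidate limnological predictors
y = rng.integers(0, 4, 300)       # placeholder: assumed trophic-state classes

# Classification tree as the feature selector: keep the 3 most important predictors
importances = DecisionTreeClassifier(random_state=1).fit(X, y).feature_importances_
top3 = np.argsort(importances)[-3:]   # e.g., chlorophyll-a, phosphorus, Secchi depth
X_sel = X[:, top3]

# Small ANN (one hidden layer of 20 neurons) trained on the reduced feature set
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=1)
ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=1)
print("Hybrid CT-ANN test accuracy:", ann.fit(X_tr, y_tr).score(X_te, y_te))
```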
Lettuce growth stage identification based on phytomorphological variations using coupled color superpixels and multifold watershed transformation
Ronnie Sabino Concepcion II; Jonnel Dorado Alejandrino; Sandy Cruz Lauguico; Rogelio Ruzcko Tobias; Edwin Sybingco; Elmer Pamisa Dadios; Argel Alejandro Bandala
International Journal of Advances in Intelligent Informatics Vol 6, No 3 (2020): November 2020
Publisher: Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v6i3.435

Abstract

Identifying the plant's developmental growth stages from the seed leaf onward is crucial to a deeper understanding of plant science and cultivation management. An efficient vision-based system for plant growth monitoring requires optimal segmentation and classification algorithms. This study presents coupled color-based superpixels and multifold watershed transformation for segmenting lettuce plants from the complicated background of a smart-farm aquaponic system, together with machine learning models that classify lettuce growth as vegetative, head development, or for harvest based on the phytomorphological profile. Morphological computation extracted the number of leaves, biomass area and perimeter, convex area, convex hull area and perimeter, major and minor axis lengths of the dominant leaf, and length of the plant skeleton. Phytomorphological variations of biomass compactness, convexity, solidity, plant skeleton, and perimeter ratio were included as inputs to the classification network. The Lab color space information extracted from the training image set was overlaid with 1,000 superpixel regions generated by K-means clustering on each pixel class. A six-level watershed transformation with distance transformation and minima imposition was employed to segment the lettuce plant from other pixel objects. The accuracies of correctly classifying the vegetative, head development, and harvest growth stages are 88.89%, 86.67%, and 79.63%, respectively. The experiments show test accuracy rates of 60% for LDA, 85% for ANN, and 88.33% for QSVM; comparative analysis confirmed that QSVM outperformed the optimized LDA and ANN in classifying lettuce growth stages. This research developed a seamless model for segmenting vegetation pixels and predicting the lettuce growth stage, which is essential for plant computational phenotyping and the optimization of agricultural practice.
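A minimal sketch of the segmentation side of the pipeline, assuming a scikit-image implementation: Lab color conversion, roughly 1,000 K-means-based superpixels (SLIC is used here as a stand-in for the paper's superpixel step), and a distance-transform watershed to separate plant pixels from the background. The synthetic image, the Otsu-thresholded a*-channel mask, and the marker rule are all placeholders rather than the authors' exact procedure.

```python
# Hypothetical sketch: Lab conversion, ~1,000 SLIC superpixels, then a
# distance-transform watershed. Not the authors' implementation.
import numpy as np
from scipy import ndimage as ndi
from skimage.color import rgb2lab
from skimage.segmentation import slic, watershed
from skimage.filters import threshold_otsu

rng = np.random.default_rng(2)
image = rng.random((256, 256, 3))                 # placeholder RGB frame

lab = rgb2lab(image)                              # Lab color space features
superpixels = slic(image, n_segments=1000, compactness=10, start_label=1)

# Rough foreground mask from the a* channel (greenness), then watershed
mask = lab[..., 1] < threshold_otsu(lab[..., 1])
distance = ndi.distance_transform_edt(mask)
markers, _ = ndi.label(distance > 0.5 * distance.max())
labels = watershed(-distance, markers, mask=mask)
print("superpixels:", superpixels.max(), "segments:", labels.max())
```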
Identifying threat objects using faster region-based convolutional neural networks (faster R-CNN)
Reagan Galvez; Elmer Pamisa Dadios
International Journal of Advances in Intelligent Informatics Vol 8, No 3 (2022): November 2022
Publisher: Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v8i3.952

Abstract

Automated detection of threat objects in security X-ray images is vital to prevent unwanted incidents in busy places like airports, train stations, and malls. The manual method of threat object detection is time-consuming and tedious, and the person on duty can overlook threat objects because of the limited time for checking every person's belongings. As a solution, this paper presents a faster region-based convolutional neural network (Faster R-CNN) object detector that automatically identifies threat objects in X-ray images using the IEDXray dataset, which is composed of scanned X-ray images of improvised explosive device (IED) replicas without the main charge. This paper extensively evaluates the Faster R-CNN architecture for threat object detection to determine which configuration improves detection performance. Our findings show that the proposed method can identify three classes of threat objects in X-ray images. In addition, the mean average precision (mAP) of the threat object detector can be improved by increasing the resolution of the input image, at the cost of detection speed. The detector achieved 77.59% mAP with an inference time of 208.96 ms when the input image was resized to 900 × 1536 resolution. Results also showed that increasing the number of bounding box proposals did not significantly improve detection performance: the detector achieved only 75.65% mAP with 150 bounding box proposals, and doubling the number of proposals reduced the mAP to 72.22%.
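A minimal sketch of a Faster R-CNN detector configured along the lines discussed above, using torchvision as an assumed framework (the paper does not state its toolkit): inputs resized toward 900 × 1536 and the number of RPN proposals capped at 150 for inference. The class count (3 threat classes plus background), the random input tensor, and the parameter names, which assume a recent torchvision release, are illustrative rather than the IEDXray setup itself.

```python
# Hypothetical sketch with torchvision's Faster R-CNN, not the authors' trained model.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(
    weights=None, weights_backbone=None,  # no pretrained weights for this offline sketch
    num_classes=4,                        # assumed: 3 threat classes + background
    min_size=900, max_size=1536,          # resize inputs toward 900 x 1536
    rpn_post_nms_top_n_test=150,          # limit bounding-box proposals at test time
)
model.eval()

with torch.no_grad():
    image = torch.rand(3, 900, 1536)      # stand-in for a scanned X-ray image
    detections = model([image])[0]
    print(detections["boxes"].shape, detections["labels"].shape)
```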