Journal: Eduma : Mathematics Education Learning and Teaching

Higher Order Thinking Skills (HOTS) Test Instrument: Validity and Reliability Analysis with the Rasch Model
Rivo Panji Yudha
Eduma : Mathematics Education Learning and Teaching Vol 12, No 1 (2023)
Publisher : Jurusan Tadris Matematika IAIN Syekh Nurjati Cirebon

DOI: 10.24235/eduma.v12i1.9468

Abstract

This article describes an instrument that can measure students' higher-order thinking skills in learning mathematics at low, medium, and high difficulty levels. The study aims to estimate the validity and determine the reliability of a higher-order thinking test instrument for mathematics learning based on the Rasch model. The research was conducted through a quantitative descriptive approach at SMP Negeri 1 Gebang, with 100 respondents. The development model used was ADDIE, and the article describes each stage of development. The instruments consisted of a higher-order mathematical thinking test containing 20 questions and expert validation observation sheets. The mathematical higher-order thinking test questions were presented to three subject-matter experts. Validity was assessed through content validity and construct validity, and reliability was tested through the Rasch model approach. The results show that the HOTS assessment instrument, in the form of a test consisting of 20 essay items, was declared constructively valid and suitable for use with respect to its material, construction, linguistic, and presentation aspects. The validity results can be inspected and analyzed with the Winsteps program in the Outfit order table, which shows the fit of items functioning in the normal category for measuring students' misconceptions. Content validity examined through the Rasch model approach using Winsteps showed that the peak of the graph approached 0, the item reliability was 0.88, the item separation was 1.16, and the respondent separation was 2.65. Ninety percent of the test questions fell into the moderate difficulty category.
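
To make the reported Winsteps quantities concrete, the following is a minimal Python sketch (not the authors' analysis) of how item difficulties, separation, and reliability can be estimated for a dichotomous Rasch model. It assumes a simulated 100-person by 20-item response matrix matching the study's design, uses joint maximum likelihood estimation with numpy only, and the helper name separation_and_reliability is illustrative rather than anything from the paper or Winsteps.

# Minimal sketch under the assumptions stated above: simulated 0/1 responses,
# joint maximum likelihood (JML) estimation of a dichotomous Rasch model,
# then rough separation and reliability indices analogous to Winsteps output.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 100, 20                      # mirrors the study's design
true_theta = rng.normal(0.0, 1.0, n_persons)      # simulated person abilities
true_b = np.linspace(-2.0, 2.0, n_items)          # simulated item difficulties
prob = 1.0 / (1.0 + np.exp(-(true_theta[:, None] - true_b[None, :])))
X = (rng.random((n_persons, n_items)) < prob).astype(float)   # 0/1 responses

theta = np.zeros(n_persons)                       # ability estimates
b = np.zeros(n_items)                             # difficulty estimates
for _ in range(200):                              # alternating Newton-style JML updates
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    info = p * (1.0 - p)
    theta += (X - p).sum(axis=1) / np.clip(info.sum(axis=1), 1e-9, None)
    theta = np.clip(theta, -6.0, 6.0)             # keep perfect scores finite
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
    info = p * (1.0 - p)
    b -= (X - p).sum(axis=0) / np.clip(info.sum(axis=0), 1e-9, None)
    b -= b.mean()                                 # anchor the scale: mean item difficulty 0

# Standard errors from the Fisher information at the final estimates.
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
se_items = 1.0 / np.sqrt((p * (1.0 - p)).sum(axis=0))
se_persons = 1.0 / np.sqrt((p * (1.0 - p)).sum(axis=1))

def separation_and_reliability(estimates, se):
    # Rasch-style separation G = sqrt(true variance / error variance)
    # and reliability R = true variance / observed variance.
    observed_var = estimates.var(ddof=1)
    error_var = np.mean(se ** 2)
    true_var = max(observed_var - error_var, 0.0)
    return np.sqrt(true_var / error_var), true_var / observed_var

item_sep, item_rel = separation_and_reliability(b, se_items)
person_sep, person_rel = separation_and_reliability(theta, se_persons)
print(f"item separation {item_sep:.2f}, item reliability {item_rel:.2f}")
print(f"person separation {person_sep:.2f}, person reliability {person_rel:.2f}")

Because the data here are simulated, the printed values will not reproduce the paper's figures (item reliability 0.88, item separation 1.16, respondent separation 2.65); the sketch only shows how such indices are defined and computed from Rasch estimates and their standard errors.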