Irfansyah, Astria Nur
Institut Teknologi Sepuluh Nopember

Published: 4 Documents
Articles

Found 4 Documents

Inclined Image Recognition for Aerial Mapping using Deep Learning and Tree based Models
Attamimi, Muhammad; Mardiyanto, Ronny; Irfansyah, Astria Nur
TELKOMNIKA (Telecommunication Computing Electronics and Control) Vol 16, No 6: December 2018
Publisher : Universitas Ahmad Dahlan

Show Abstract | Original Source | Check in Google Scholar | Full PDF (1677.168 KB)

Abstract

One of the important capabilities of an unmanned aerial vehicle (UAV) is aerial mapping. Aerial mapping is an image registration problem, i.e., the problem of transforming different sets of images into one coordinate system. In image registration, the quality of the output is strongly influenced by the quality of the input (i.e., the images captured by the UAV). Therefore, selecting high-quality input images is important and one of the challenging tasks in aerial mapping, because the ground truth of the mapping process is not available before the UAV flies. Typically, a UAV takes images in sequence irrespective of its flight orientation and roll angle. This may result in the acquisition of poor-quality images, possibly compromising the quality of the mapping results and increasing the computational cost of the registration process. To address these issues, we need a recognition system that is able to recognize images that are not suitable for the registration process. In this paper, we define these unsuitable images as “inclined images,” i.e., images captured when the UAV is not perpendicular to the ground. Although the inclination angle could be calculated using a gyroscope attached to the UAV, our interest here is to recognize inclined images without additional sensors, in order to mimic how humans perform this task visually. To realize that, we combine a deep learning method with tree-based models to build an inclined image recognition system. We validated the proposed system on images captured by the UAV: we collected 192 images and labelled them at two levels of granularity (i.e., coarse and fine classification). We compared the proposed system with several models, and the results showed that it improved the accuracy rate by up to 3%.
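The abstract describes a pipeline of deep features followed by a tree-based classifier. The sketch below is a minimal illustration of that idea, assuming a pretrained ResNet-18 as the feature extractor and a random forest as the tree-based model; the folder uav_frames/ and the specific backbone/classifier choices are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch: deep CNN features + tree-based classifier for
# "inclined" vs. "level" UAV frames. Assumes torch, torchvision,
# and scikit-learn are installed; the dataset layout is hypothetical.
import torch
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Pretrained CNN used as a fixed feature extractor (classification head removed).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

transform = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder of UAV frames, one subfolder per class label.
dataset = ImageFolder("uav_frames/", transform=transform)
loader = DataLoader(dataset, batch_size=16, shuffle=False)

features, labels = [], []
with torch.no_grad():
    for images, targets in loader:
        features.append(backbone(images))   # (batch, 512) deep features
        labels.append(targets)
X = torch.cat(features).numpy()
y = torch.cat(labels).numpy()

# Tree-based model trained on top of the deep features.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("Cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```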
Sistem Autodocking Mobile Robot Berbasis Suara Untuk Pengisian Ulang Baterai
Hidayanto, Ari; Rivai, Muhammad; Irfansyah, Astria Nur
Jurnal Teknik ITS Vol 7, No 2 (2018)
Publisher : Lembaga Penelitian dan Pengabdian Kepada Masyarakat (LPPM), ITS

Show Abstract | Original Source | Check in Google Scholar | Full PDF (256.04 KB)

Abstract

The work cycle of an autonomous mobile robot is designed so that it can travel and operate continuously along a path defined by Global Positioning System (GPS) waypoints without human intervention. One of the important subsystems for this purpose is an autodocking system, used when the battery level is low or nearly depleted, so that the robot returns to a power station to recharge. While approaching the power station, the robot must position itself accurately so that the transmitter and the battery receiver lie within the transfer range of the Wireless Power Transmission (WPT) system. This research implements a mobile robot built around an Arduino Mega 2560 as the main controller, with a Ublox Neo M8N GPS, an HMC5883L compass module, a condenser microphone module, an HC-SR04 ultrasonic module, an L298N motor driver with four DC motors, and an LCD. The sound source is a Polytron Muze (PSP B1) speaker emitting a sonar tone in the 900 Hz to 1100 Hz frequency range. Microphone modules are mounted on the left and right sides of the robot, and the direction of the sound source at the WPT station is estimated from the difference in intensity level between them. Test results show that the robot can reach waypoints with a position error of up to 6 meters. Adding directional horns to the sound sensors gives an effective directivity of 90° and -90° and makes the sensors more sensitive. The robot can detect the sound and reach its source within an effective distance of less than 100 cm, with a travel time of less than 60 seconds.
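As a rough illustration of the intensity-difference idea described in the abstract, the sketch below turns left/right microphone levels into a steering command; the deadband value and the function interface are hypothetical and are not taken from the paper's firmware.

```python
# Minimal sketch: steer toward a sound source from the difference in
# intensity between the left and right microphones. Readings are assumed
# to be already band-pass filtered around the 900-1100 Hz beacon tone.
def steer_from_intensity(left_level: float, right_level: float,
                         deadband: float = 0.05) -> str:
    """Return a steering command from the left/right intensity difference."""
    diff = left_level - right_level
    if abs(diff) < deadband:
        return "forward"        # levels roughly equal: source is ahead
    return "turn_left" if diff > 0 else "turn_right"

# Example: the right microphone hears the beacon louder, so turn right.
print(steer_from_intensity(0.31, 0.52))   # -> "turn_right"
```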
Unmanned Surface Vehicle Untuk Mencari Lokasi Tumpahan Minyak Menggunakan Ardupilot Mega
Permana, Dedy; Rivai, Muhammad; Irfansyah, Astria Nur
Jurnal Teknik ITS Vol 7, No 2 (2018)
Publisher : Lembaga Penelitian dan Pengabdian Kepada Masyarakat (LPPM), ITS

Show Abstract | Original Source | Check in Google Scholar | Full PDF (377.123 KB)

Abstract

Oil is an energy source widely used in industry. Recently, oil consumption has increased, which can lead to growing oil pollution in waters from crude oil spills. This growing oil pollution is a serious environmental concern: it degrades the marine environment and hampers the marine tourism sector. In this research, a device for detecting oil spills in waters has been built in the form of an Unmanned Surface Vehicle (USV) that can sweep an area along predefined waypoints to locate a spill. The USV is equipped with a resistive sensor used to detect the presence of spilled oil on the water. For autonomous operation, a Global Positioning System (GPS) receiver, a Ublox Neo 6M, provides the position of the USV and the location of the oil spill. An MPU6000 IMU sensor provides the heading and speed of the USV. The Ardupilot Mega is used as the microcontroller because it integrates a barometer, accelerometer, gyroscope, and magnetometer. Using the waypoints programmed into the Ardupilot Mega, the USV can locate oil spills with an oil layer thickness between 3 mm and 40 mm. The position reported by GPS deviates from the true location by 30 cm, and the USV is able to transmit these data in real time.
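A minimal sketch of the survey behaviour described in the abstract: follow a list of GPS waypoints and record the position whenever the resistive sensor reading crosses an oil-detection threshold. The threshold, the arrival radius, and the callback interfaces are illustrative assumptions rather than the authors' Ardupilot configuration.

```python
# Minimal sketch: waypoint sweep with an oil-detection check at each GPS fix.
import math

OIL_THRESHOLD = 0.7        # assumed normalized resistive-sensor reading
ARRIVAL_RADIUS_M = 2.0     # a waypoint counts as reached within this radius

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate distance in metres using an equirectangular projection."""
    k = 111_320.0          # metres per degree of latitude (approx.)
    dx = (lon2 - lon1) * k * math.cos(math.radians((lat1 + lat2) / 2))
    dy = (lat2 - lat1) * k
    return math.hypot(dx, dy)

def survey(waypoints, read_gps, read_oil_sensor, goto):
    """Sweep the waypoints; return the GPS fixes where oil was detected.

    read_gps() -> (lat, lon), read_oil_sensor() -> float, and goto(wp)
    are placeholder callbacks standing in for the vehicle's autopilot link.
    """
    detections = []
    for wp in waypoints:
        goto(wp)                                   # hand the target to the autopilot
        while True:
            lat, lon = read_gps()
            if read_oil_sensor() > OIL_THRESHOLD:
                detections.append((lat, lon))      # log the spill location
            if distance_m(lat, lon, *wp) < ARRIVAL_RADIUS_M:
                break                              # waypoint reached, move on
    return detections
```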