Vol. 1 No. 1 (2022)

  • Yavuz Selim TAŞPINAR, Murat SELEK

    It is very difficult for visually impaired individuals to avoid obstacles, to notice or recognize obstacles at a distance, and to notice and follow the special paths made for them. They cope with these situations by touch or with the help of a walking cane. Because of these safety problems, it is difficult for visually impaired individuals to move freely, which affects them negatively both socially and in terms of health. To address these problems, a support system for visually impaired individuals is proposed. The vision support system includes an embedded system with a camera and an audio warning system so that the visually impaired individual can identify the objects in front of them, and a circuit with an ultrasonic sensor so that they can detect obstacles early and take precautions. Object recognition is realized with convolutional neural networks: the Faster R-CNN model was used, together with a model we created that can recognize 25 kinds of products. With the help of the dataset we created and the network trained on it, the visually impaired individual will be able to identify some market products. In addition, auxiliary elements were added to the walking canes they use: a camera system that enables the visually impaired individual to notice the guide lines made for the visually impaired in the environment, and a tracking circuit placed at the tip of the cane so that they can follow these lines easily and move more freely. Each system was designed separately so that warnings can be delivered to the visually impaired person quickly and without delay; in this way, we tried to reduce the error rate caused by the processing load. The system is designed to be wearable, easy to use, and low-cost so that it is accessible to everyone.
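The early-obstacle detection described above relies on an ultrasonic sensor measuring echo round-trip time. A minimal sketch of that conversion, assuming an HC-SR04-style sensor (the specific sensor and the 100 cm warning threshold are assumptions, not details from the abstract):

```python
SPEED_OF_SOUND_CM_PER_S = 34300  # approximate speed of sound in air at 20 °C

def echo_to_distance_cm(echo_duration_s: float) -> float:
    """The pulse travels to the obstacle and back, so halve the round trip."""
    return echo_duration_s * SPEED_OF_SOUND_CM_PER_S / 2

def obstacle_warning(distance_cm: float, threshold_cm: float = 100.0) -> bool:
    """Return True when an obstacle is close enough to trigger an audio alert."""
    return distance_cm < threshold_cm

# Example: a 2 ms round-trip echo corresponds to ~34.3 cm.
d = echo_to_distance_cm(0.002)
print(round(d, 1), obstacle_warning(d))  # → 34.3 True
```

On a real embedded board, `echo_duration_s` would be measured from the sensor's echo pin; here it is passed in directly so the logic can be shown in isolation.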

  • Esra Kaya, Ismail Saritas

    Brain-Computer Interfaces (BCIs) enable users to communicate directly with machines through brain signals, without moving any body part. Thus, they have become very useful for prostheses, electric wheelchairs, virtual keyboards, and other applications such as surveys and emotion classification. In this study, EEG signal processing was performed on the BCI Competition III-3a dataset, which contains motor imagery (MI) signals with four classes. Features of the non-stationary EEG signals belonging to three subjects were extracted using Power Spectral Density (PSD) with the Welch method, Wavelet Decomposition (WD), Empirical Mode Decomposition (EMD), and the Hilbert-Huang Transform (HHT). From the 900 extracted features, the dimension of the feature space was reduced using an autoencoder, an unsupervised learning algorithm. The average accuracy obtained with an Artificial Neural Network (ANN) is 74.5% across all binary classifications, which is generally a good result given the non-stationary nature of EEG signals. The best classification performance was obtained with 801 features, produced using an autoencoder with 400 hidden-layer neurons.
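The dimension reduction step above works by training an autoencoder and keeping only the hidden-layer activations as the reduced feature vector. A toy pure-Python sketch of that structure (not the paper's network: sizes here are 4 inputs compressed to 2 hidden units, standing in for the paper's 900-feature input and 400-neuron hidden layer, and the weights are random rather than trained):

```python
import math
import random

random.seed(0)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One dense layer: out_j = sigmoid(sum_i w[j][i] * x[i] + b[j])."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

n_in, n_hidden = 4, 2
enc_w = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
enc_b = [0.0] * n_hidden
dec_w = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_in)]
dec_b = [0.0] * n_in

x = [0.2, 0.9, 0.4, 0.7]
code = layer(x, enc_w, enc_b)      # reduced feature vector (dimension 2)
recon = layer(code, dec_w, dec_b)  # reconstruction of the input (dimension 4)
mse = sum((a - b) ** 2 for a, b in zip(x, recon)) / n_in
print(len(code), len(recon))       # → 2 4
```

Training would adjust `enc_w`/`dec_w` to minimize `mse` over the dataset; the classifier then consumes `code` instead of the full feature vector.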

  • S. T. Patil, Aryan Aher, Aarushi Bhate, Adnan Shaikh, Sandhya Vinukonda

    In recent years, autonomous vehicle (AV) technology has improved dramatically. Self-driving cars have the potential to transform urban mobility in India by offering sustainable, convenient, and congestion-free transportation. However, India faces challenges such as potholes and the need for enhanced lane detection before autonomous vehicles can become a reality. The project's central goal is to create a Convolutional Neural Network (CNN) model that can scan and identify its surroundings and navigate accordingly. This paper proposes a project accomplished by training a CNN on a dataset of images and videos to perform advanced lane identification, pothole recognition, and sophisticated object detection.
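At the core of the CNN mentioned above is the 2-D convolution operation, whose learned kernels pick out low-level features such as lane-marking edges. A hedged illustration (the toy image and the hand-crafted vertical-edge kernel are assumptions for demonstration; a real network learns its kernels from data):

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (no padding, stride 1) over a 2-D list image."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(kernel[i][j] * image[r + i][c + j]
                 for i in range(kh) for j in range(kw))
             for c in range(out_w)]
            for r in range(out_h)]

# Toy 4x4 image: dark road surface (0) with a bright lane marking (1) at right.
road = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# Sobel-like kernel that responds where brightness changes from left to right.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
print(conv2d(road, kernel))  # → [[3, 3], [3, 3]]
```

Every output cell is large because the kernel straddles the dark-to-bright boundary of the lane marking; on uniform road surface the response would be zero.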

  • Mucahit Akar, Kadir Sabanci, Muhammet Fatih Aslan

    Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), the virus that causes Coronavirus Disease 2019 (COVID-19), emerged from the city of Wuhan in the People’s Republic of China and has affected the whole world. This disease, classified as an epidemic, continues to spread despite the various measures taken. Vaccination campaigns, screening, and early diagnosis aim to reduce the rates of death and infection. Meanwhile, new coronavirus variants are emerging, and people are kept under surveillance to prevent the spread of the virus; by quarantining infected people, transmission of the epidemic to more people is prevented. For this reason, early diagnosis kits and tests are vital. Today, specialists detect various abnormalities with the help of medical imaging tools, and this process can also be performed on medical images using image processing techniques: methods such as image classification, image segmentation, and image quantification enable operations such as object detection, localization, and quantitative analysis. In this study, the aim is to detect COVID-19 on lung CT scan images with deep learning methods. CNN-based state-of-the-art deep learning models, pre-trained on millions of images and adapted to this similar problem with transfer learning, were used. The VGG19, ResNet152, and MobileNetV2 models were chosen and their results compared. According to the performance criteria, validation accuracies of 93.53%, 95%, and 87.28% were obtained from the VGG19, ResNet152, and MobileNetV2 models, respectively. These results show that these models perform well for the detection of COVID-19 from lung CT scan images.
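The validation accuracies quoted above (93.53%, 95%, 87.28%) are the fraction of held-out CT images whose predicted class matches the ground-truth label. A minimal sketch of that metric, with made-up labels purely for illustration:

```python
def validation_accuracy(predicted, actual):
    """Fraction of samples where the predicted class equals the true class."""
    assert len(predicted) == len(actual)
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Toy example: binary COVID (1) / non-COVID (0) labels for 8 CT scans.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
print(f"{validation_accuracy(y_pred, y_true):.2%}")  # → 87.50%
```

In the transfer-learning workflow the abstract describes, this metric is computed after each training epoch on a validation split that the pre-trained-then-fine-tuned model never sees during training.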

  • Emre Avuclu, Murat Koklu

    The development of technology has made it considerably easier for people to meet a number of needs; without technology, it is no longer possible to run certain applications. Nowadays, in many countries, learning to speak English poses problems, such as finding the time to practice. In this study, a game-based application was developed to help a person anywhere in the world pronounce English better. The application was implemented in the C# programming language. The Speech.dll library was used to introduce voice commands to the system and to perform the other necessary operations. Voice commands can be sent by the user via a wireless headset from anywhere within its range. There is no need to wait at the computer while using the application, because the application gives voice feedback telling the user whether the pronunciation was right or wrong after the voice recognition process. In this application, letter, word, or sentence exercises can be done. The program aims to improve the pronunciation of people who want to improve their English speaking skills.
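The right/wrong feedback step described above boils down to comparing the speech recognizer's transcription against the target letter, word, or sentence. A hedged sketch of that comparison logic (shown in Python rather than the authors' C#, with the normalization rules assumed):

```python
def pronunciation_feedback(target: str, recognized: str) -> str:
    """Compare recognized speech to the target, ignoring case and whitespace."""
    ok = target.strip().lower() == recognized.strip().lower()
    return "Correct!" if ok else f"Try again: the target was '{target}'."

print(pronunciation_feedback("apple", "Apple "))      # → Correct!
print(pronunciation_feedback("schedule", "sketchy"))  # → Try again: the target was 'schedule'.
```

In the actual application, `recognized` would come from the speech recognition engine and the returned string would be spoken back to the user via text-to-speech, so no screen is needed.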