
Autonomous Car for Indian Terrain

S. T. Patil
Computer Engineering Dept., Vishwakarma Institute of Technology, Pune, India

Aryan Aher
Computer Engineering Dept., Vishwakarma Institute of Technology, Pune, India

Aarushi Bhate
Computer Engineering Dept., Vishwakarma Institute of Technology, Pune, India

Adnan Shaikh
Computer Engineering Dept., Vishwakarma Institute of Technology, Pune, India

Sandhya Vinukonda
Computer Engineering Dept., Vishwakarma Institute of Technology, Pune, India

Abstract

In recent years, autonomous vehicle (AV) technology has improved dramatically. Self-driving cars have the potential to transform urban mobility in India by offering sustainable, convenient, and congestion-free transportation. However, India poses challenges such as potholes and the need for enhanced lane detection before autonomous vehicles can become a reality. In countries like India, lane markings are often incomplete and can be mistaken for broken (dashed) lanes, and object detection must be more robust. Many traffic rules are not followed, so objects that are not correctly detected can lead to fatalities. The project's central goal is to create a Convolutional Neural Network (CNN) model that can scan and identify its surroundings and move accordingly. To achieve this, we experimented with various CNN layer configurations to maximize accuracy and implemented real-time footage-to-image conversion by processing and standardizing the dataset. This paper proposes a system built by training CNNs on a dataset of images and videos to perform advanced lane identification, pothole recognition, and sophisticated object detection.

Keywords: Autonomous Vehicle (AV), Convolutional Neural Network, Deep Learning, Neural Network

Introduction

Self-driving cars are among the most significant technological developments in the automotive field. They are widely seen as the future of mobility, but they are also among the most expensive cars. Here, we focus on two applications of automated vehicles and design prototype vehicles for them. A major problem in heavy traffic is that the driver must constantly operate the brakes, accelerator, and clutch to inch toward the destination. We propose a solution to relieve the driver in this situation: the vehicle is made intelligent so that it keeps a certain distance from surrounding vehicles and obstacles, makes decisions automatically, and moves on its own.

In recent years, autonomous vehicles have become a reality and are operated in many cosmopolitan cities. However, this leaves open the question of how efficient or reliable these cars are in countries with rugged terrain. In this paper, we aim to tackle the problems autonomous vehicles may face on rough terrain and adapt existing capabilities to the various situations that could be encountered, focusing primarily on India as the terrain of interest. It is not easy for an autonomous vehicle to handle the conditions it encounters on such roads. The main issues are as follows:

i.        The high number of potholes in India poses a problem of cars slipping or losing balance, leading to accidents.

ii.       With the high prevalence of wildlife in India comes the problem of animals coming onto the road, leading to increased accident rates.

iii.      In Indian traffic, signboards are often obscured, making traffic signs difficult to detect. In such cases, recognition would fail and could lead to fatalities.

Literature Review

According to the publication [1], the research aims to develop a self-driving automobile that uses a CNN model to make decisions based on picture input from the camera. The amount of training data and the quality of the object detection model are directly linked to the accuracy and efficiency of a self-driving automobile. In the realm of transportation, this concept asks for a more modern and secure future for all residents.

In paper [2], an approach for autonomous driving under simulated conditions is presented: the methodology uses deep learning and end-to-end learning to imitate driving behavior. The core of the driver-cloning algorithm is the NVIDIA neural network, which comprises five convolution layers, one flattening layer, and four fully connected layers. The output is the steering angle. In autonomous mode, the model drives successfully over a preset simulated path even though it was trained on a relatively small dataset.

 

The paper [3] presented a deep imitative reinforcement learning (DIRL) framework to train end-to-end driving policies for vision-based autonomous car racing. The authors combined imitation learning (IL) and reinforcement learning (RL), using IL to initialize the policy and model-based RL to enhance it further by interacting with an uncertainty-aware world model.

The application of the CNN deep learning algorithm for recognizing the surrounding environment and producing the automatic navigation required for autonomous cars is discussed in paper [4]. In an environment simulated with a self-driving car simulator, the suggested CNN-based approach operates smoothly without error and is highly stable, without oscillation.

The goal of the project in [5] is to contribute to this line of work by developing, in a driving simulator, a system that can recognize speed-limit signs and make decisions that make driving more comfortable and safer. The work proposes a YOLO-based approach to traffic sign identification in the CARLA simulator: a real-time CNN detects and recognizes CARLA speed signs, and an RGB camera sensor attached to the car collects environmental data every five frames. Animal detection systems, in turn, aim to avoid accidents caused by animal-vehicle collisions, in which humans are killed or injured and property is damaged; one such system detects animals using a template matching algorithm, as reviewed next.

In this work [6], several object detection techniques were reviewed. Regarding efficiency, the suggested system has low false positive and false negative rates. Template matching is a technique for recognizing small image regions that should correspond to a template image; it is implemented here using normalized cross-correlation (NCC). In signal processing, cross-correlation is a measure of similarity between two waveforms as a function of the time lag applied to one of them, also known as the sliding dot product or sliding inner product. Template matching is typically used to search a long-duration signal for an identifiable characteristic. Because the template may change owing to lighting and exposure conditions in applications that use image processing to determine a picture's brightness, the images must first be normalized, typically by subtracting the mean and dividing by the standard deviation at each stage. The study addressed the feature-based template matching approach utilizing NCC.

 

The proposed system in that paper uses rotation, scale, translation, and illumination-invariant properties to strengthen recognition. The traffic sign is identified using SURF-feature-based recognition: SURF features extracted from the Indian Traffic Sign Database (ITSD) are matched with the features extracted from the annotated region of an acquired image. The SURF algorithm is used due to its speed and reliability.

Koch and Brilakis [7] proposed a method that uses a histogram shape-based threshold to separate defect and non-defect regions in an image. Based on a perspective view, the authors estimate that a pothole's shape is approximately elliptical. The authors stress the importance of using machine learning in future research.

In 2017, a study in Taoyuan, Taiwan, used a data analytic approach that included correlation and regression analysis [8]; the results showed that regions with a high frequency of road potholes had a higher rate of traffic accidents. Potholes also caused irreparable damage to pizzas during delivery, so one of the top pizza businesses in the United States gave a special grant to repair them at a few sites in 2018 [9].

Methods

CNN

Convolutional neural networks (CNNs) are now widely used in computer vision [10]. CNNs are popular because of their consistently strong results in object identification and recognition. The convolution layer consists of a group of separate filters. Each filter is slid over the entire image, and the dot product is calculated between the filter and the portion of the input image it covers. After each filter has been convolved with the image, a feature map is produced. One use of feature maps is to shrink the spatial size of the image while preserving its semantic content.
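As a minimal illustration of how a convolution layer turns an image into feature maps, the sketch below applies a single convolution layer with several filters to a dummy image; TensorFlow/Keras, the filter count, and the stride are our own assumptions, since the text does not specify an implementation.

```python
# Minimal sketch (assumed TensorFlow/Keras): one convolution layer turns a
# single input image into a stack of feature maps.
import numpy as np
import tensorflow as tf

image = np.random.rand(1, 128, 128, 3).astype("float32")  # dummy RGB image, batch of 1

conv = tf.keras.layers.Conv2D(
    filters=8,             # 8 independent filters -> 8 feature maps
    kernel_size=(3, 3),    # each filter slides a 3x3 window over the image
    strides=2,             # stride > 1 shrinks the spatial size
    padding="same",
    activation="relu",
)

feature_maps = conv(image)
print(feature_maps.shape)  # (1, 64, 64, 8): smaller grid, one map per filter
```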

Neural Network

The photographs are given to the model as input [12]. The model is fed photographs of various sizes, which are read and resized to a fixed dimension using OpenCV. The model's first layer is a convolution layer with 32 filters, each of size 3x3. This layer is followed by a 20 percent dropout layer, which prevents the model from overfitting. Next, a convolution layer with 64 filters of size 3x3 is added. In this research, we demonstrate a self-driving vehicle that uses monocular vision and a CNN model to make decisions based on picture input from the camera. The accuracy and efficiency of a driverless automobile are directly correlated with the volume of training data and the quality of the object detection model.
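A hedged sketch of the layer stack described above is given below, assuming tf.keras; the input resolution, the flatten/dense head, and the single steering-angle output are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged reconstruction of the described stack: Conv(32, 3x3) -> Dropout(0.2)
# -> Conv(64, 3x3); the head and input size are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(66, 200, 3)),               # assumed size after the OpenCV resize
    layers.Conv2D(32, (3, 3), activation="relu"),   # 32 filters of size 3x3
    layers.Dropout(0.2),                            # 20 percent dropout against overfitting
    layers.Conv2D(64, (3, 3), activation="relu"),   # 64 filters of size 3x3
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                                # e.g. a steering-angle output (assumed)
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```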

Deep Reinforcement Learning

Deep reinforcement learning (DRL) combines artificial neural networks with reinforcement learning so that software agents can learn the best possible actions in a virtual environment and achieve their goals [12]. DRL has been used to solve a variety of challenges, including, for example, complex board games and computer games. However, using DRL to solve real robotics tasks is more complicated. The preferred approach is to train the agent in a simulator and transfer it to the real world; however, simulator-trained models tend to perform poorly in real-world environments because of the differences between the simulator and reality.
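As a minimal sketch of the agent-environment loop behind the "train in a simulator, then transfer" idea, the snippet below uses the gymnasium package with its CartPole environment as a stand-in for a driving simulator; the random policy is a placeholder for a learned DRL policy, and none of this is taken from the cited work.

```python
# Sketch of the simulator training loop (gymnasium assumed); a real DRL agent
# would replace the random policy with a neural network and update it from the
# collected transitions.
import gymnasium as gym

env = gym.make("CartPole-v1")            # simulated environment (stand-in for a driving simulator)
obs, info = env.reset(seed=0)

for step in range(200):
    action = env.action_space.sample()   # placeholder policy; DRL would query a network here
    obs, reward, terminated, truncated, info = env.step(action)
    # a DRL agent would store (obs, action, reward, next_obs) and update its network here
    if terminated or truncated:
        obs, info = env.reset()

env.close()
```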

Problem Statement

Autonomous vehicles have become commonplace in some countries, but they have yet to reach most of the world and still have reliability problems at scale. Accidents caused by autonomous vehicles continue to raise questions about their safety, especially on terrain that is not always good, where road and lane detection may be more complicated than in most cosmopolitan cities.

With India in mind, there are many issues that autonomous cars could face, such as signboards covered with dirt and incomplete lane lines on twisting roads, which can confuse the decision-making system, for example by treating an unfinished lane marking as a broken (dashed) lane. There is also a need for improved object detection to handle the large number of motorcycles on Indian roads. Autonomous vehicles should also be able to recognize potholes, which can unbalance the vehicle and lead to accidents.

Objectives

Our project aims to provide solutions to the problems that autonomous vehicles may face in difficult situations and terrains, as well as to enhance existing features. The project focuses on using computer vision to implement object detection algorithms for traffic signs, advanced lane detection, and obstacles along the road. The system should also manage the car's speed: when something comes across the vehicle, it should slow down and make decisions accordingly while driving.

Lane Detection

Figure 1. Lane detection

 

Figure 2. Lane detection

Because most road accidents occur when the driver strays from the vehicle's path, safety is the primary goal of all road lane detection systems, and various vision-based road identification algorithms have been created to avoid vehicle collisions. A horizontal straight line is drawn across the image, and the points where it crosses the extended lane markings are detected (represented by red circles). Points that lie close together are clustered into groups based on their distances.
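As a hedged illustration of vision-based lane detection, the sketch below uses a common OpenCV pipeline (Canny edges followed by a probabilistic Hough transform) rather than the scan-line-and-clustering method described above; "road.jpg", the region-of-interest polygon, and all thresholds are placeholder assumptions.

```python
# Common lane-detection sketch with OpenCV: edge map -> road region of
# interest -> Hough line segments drawn back onto the frame.
import cv2
import numpy as np

frame = cv2.imread("road.jpg")                       # placeholder input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blur, 50, 150)                     # edge map of the road scene

h, w = edges.shape
mask = np.zeros_like(edges)
roi = np.array([[(0, h), (w // 2 - 50, h // 2 + 50),
                 (w // 2 + 50, h // 2 + 50), (w, h)]], dtype=np.int32)
cv2.fillPoly(mask, roi, 255)                         # keep only the area in front of the car
edges = cv2.bitwise_and(edges, mask)

lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=100)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 3)  # overlay lane segments

cv2.imwrite("lanes.jpg", frame)
```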

Object Detection

Self-driving cars must detect objects on the road. The three critical sensors they use, cameras, radar, and lidar, work in tandem, much like the eyes and brain of a human. Used together, they give the automobile a good picture of its surroundings and help the vehicle determine the position, speed, and 3D shape of objects around it.

Object detection is a computer vision task used in a variety of consumer applications, such as surveillance and security systems, mobile text recognition, and disease diagnosis with MRI/CT scans. It is also a critical component of autonomous driving. Autonomous vehicles rely on perception of their environment to enable safe and reliable driving. This perception system uses object detection algorithms to precisely locate objects in the vehicle's vicinity, such as pedestrians, cars, traffic signs, and obstacles. Deep-learning-based object detectors are crucial for detecting and localizing these objects in real time. This article discusses cutting-edge object detectors and open issues for their integration into self-driving automobiles.
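As a concrete, hedged illustration of deep-learning-based detection, the snippet below runs a pretrained YOLO model on a single placeholder street image; the ultralytics package and the "yolov8n.pt" weights are our own choices, consistent with the YOLO-based detection cited in [5] but not the authors' exact pipeline.

```python
# Illustrative detection with a pretrained YOLO model (ultralytics assumed);
# prints class name, confidence, and bounding box for each detection.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")               # small COCO-pretrained model (placeholder choice)
results = model("street.jpg")            # run inference on one placeholder frame

for box in results[0].boxes:
    cls_id = int(box.cls[0])             # predicted class index
    conf = float(box.conf[0])            # detection confidence
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(model.names[cls_id], round(conf, 2), (x1, y1, x2, y2))
```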

Figure 3. Object detection

 

Figure 4. Object detection

 

Pothole Detection

 

Figure 5. Pothole on a road

The goal is to predict whether potholes are present in a given set of frames. The system detects road depressions using a live video feed processed by a CNN model. The video is converted to a specific number of frames, and all images are then preprocessed. Preprocessing involves converting all images from color to grayscale (to reduce processing load) and resizing them to the same size, i.e., 300 x 300 pixels; an output value corresponding to each image in the dataset is produced for training.

All the images from the dataset are processed and divided into training and testing datasets. 
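A hedged sketch of the frame extraction, preprocessing, and split steps above follows; OpenCV and scikit-learn are assumed, and "road.mp4", the frame-sampling rate, and the label array are placeholders.

```python
# Frame extraction -> grayscale -> 300x300 resize -> train/test split.
import cv2
import numpy as np
from sklearn.model_selection import train_test_split

frames = []
cap = cv2.VideoCapture("road.mp4")                    # placeholder road video
ok, frame = cap.read()
count = 0
while ok:
    if count % 10 == 0:                               # keep every 10th frame (assumed rate)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # colour -> grayscale
        resized = cv2.resize(gray, (300, 300))          # uniform 300 x 300 size
        frames.append(resized / 255.0)                  # scale pixels to [0, 1]
    ok, frame = cap.read()
    count += 1
cap.release()

X = np.array(frames)[..., np.newaxis]                 # shape: (n, 300, 300, 1)
y = np.zeros(len(X))                                  # placeholder labels: 1 = pothole, 0 = none
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```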

The processed images are passed to a CNN model for pothole detection. The CNN model is a sequential model with two convolutional layers using ReLU as the activation function, followed by an average pooling layer.
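A hedged reconstruction of that sequential model is shown below (tf.keras assumed); the filter counts and the sigmoid output are illustrative, since only the layer types are specified in the text.

```python
# Two Conv2D + ReLU layers followed by average pooling, as described above;
# filter counts and the classification head are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

pothole_model = tf.keras.Sequential([
    layers.Input(shape=(300, 300, 1)),                 # grayscale 300 x 300 frames
    layers.Conv2D(16, (3, 3), activation="relu"),      # first convolution + ReLU
    layers.Conv2D(32, (3, 3), activation="relu"),      # second convolution + ReLU
    layers.AveragePooling2D(pool_size=(2, 2)),         # average pooling layer
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),             # pothole / no pothole
])
pothole_model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
```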

 

Figure 6. Flow for pothole detection

Signboard Detection

The aim is to create a system that can detect and recognize text and symbols on traffic panels from street-level pictures. There are numerous text-based traffic signboards along the roadside, and capturing all of them manually takes much work. Most current Automatic Signboard Recognition Systems (ASRs) are symbol-based, so additional research is needed on text-based ASR.
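As a hedged sketch of the text-recognition step, the snippet below applies Otsu thresholding and Tesseract OCR (via the pytesseract package, our own choice) to a cropped signboard region; "signboard.jpg" is a placeholder, and the Tesseract engine must be installed separately.

```python
# Read the text off a cropped signboard image: grayscale -> Otsu binarization -> OCR.
import cv2
import pytesseract

sign = cv2.imread("signboard.jpg")                    # placeholder signboard crop
gray = cv2.cvtColor(sign, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # improve contrast for OCR

text = pytesseract.image_to_string(binary)            # recognized sign text
print(text.strip())
```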

Results and conclusion

The project presents an approach to developing a system for autonomous vehicles to operate on Indian roads and terrain. The fundamental idea is to create an autonomous car that can sense its environment and move without human input. This paper proposes car automation accomplished by recognizing the road, signals, obstacles, and stop signs, and by responding and making decisions such as changing the vehicle's course, stopping at red signals, and moving on green signals using machine learning techniques.

Future Scope

Based on the planning of our project, we offer some recommendations to improve the system's features and make it more user-friendly and effective:

i.        The system can be further enhanced by implementing the basic models on hardware.

ii.       It can be further enhanced by increasing the model’s accuracy.

iii.      Decision-making can be extended and managed according to new information and improved object detection.

Acknowledgment

We would like to thank the supporters of this project and acknowledge the efforts of those who have contributed significantly to it. We express our gratitude and appreciation to Professor S. T. Patil for his valuable time and guidance during this project. We also express our sincere and deep gratitude to the Vishwakarma Institute of Technology, Pune, for allowing us to work on such a fascinating project.

References

  1. N. Sanil, P. A. N. Venkat, V. Rakesh, R. Mallapur and M. R. Ahmed, "Deep Learning Techniques for Obstacle Detection and Avoidance in Driverless Cars," 2020 International Conference on Artificial Intelligence and Signal Processing (AISP), 2020, pp. 1-4, doi: 10.1109/AISP48273.2020.9073155.
  2. C. Sharma, S. Bharathiraja and G. Anusooya, "Self Driving Car using Deep Learning Technique," International Journal of Engineering Research & Technology (IJERT), vol. 9, no. 6, June 2020.
  3. P. Cai, H. Wang, H. Huang, Y. Liu and M. Liu, "Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement Learning," in IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 7262-7269, Oct. 2021, doi: 10.1109/LRA.2021.3097345.
  4. I. Sonata, Y. Heryadi, L. Lukas and A. Wibowo, "Autonomous car using CNN deep learning algorithm," Journal of Physics: Conference Series, vol. 1869, no. 1, p. 012071, April 2021.
  5. Y. Valeja, S. Pathare, D. Patel and M. Pawar, "Traffic Sign Detection using Clara and Yolo in Python," 2021 7th International Conference on Advanced Computing and Communication Systems (ICACCS), 2021, pp. 367-371, doi: 10.1109/ICACCS51430.2021.9442065.
  6. N. Banupriya, S. Saranya, R. Swaminathan, S. Harikumar and S. Palanisamy, "Animal detection using deep learning algorithm," J. Crit. Rev., vol. 7, no. 1, pp. 434-439, 2020.
  7. C. Koch and I. Brilakis, “Pothole detection in asphalt pavement images,” Advanced Engineering Informatics, 01-Feb-2011. [Online].
  8. B. -H. Lin and S. -F. Tseng, "A predictive analysis of citizen hotlines 1999 and traffic accidents: A case study of Taoyuan city," 2017 IEEE International Conference on Big Data and Smart Computing (BigComp), 2017, pp. 374-376, doi: 10.1109/BIGCOMP.2017.7881696.
  9. D. O'Carroll, “For the love of pizza, Domino's is now fixing potholes in roads,” Stuff, 12-Jun-2018. [Online]. Available: https://www.stuff.co.nz/motoring/104643123/for-the-love-of-pizza-dominos-is-now-fixing-potholes-in-roads. [Accessed: 30-Oct-2022]. 
  10. S. Uchida, S. Ide, B. K. Iwana and A. Zhu, "A Further Step to Perfect Accuracy by Training CNN with Larger Data," 2016 15th International Conference on Frontiers in Handwriting Recognition (ICFHR), 2016, pp. 405-410, doi: 10.1109/ICFHR.2016.0082.
  11. M. Egmont-Petersen, D. de Ridder, and H. Handels, “Image processing with neural networks-A Review,” Pattern Recognition, 19-Jun-2002. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0031320301001789. [Accessed: 30-Oct-2022]. 
  12. Y. Li, “Reinforcement learning in practice: Opportunities and challenges,” arXiv.org, 22-Apr-2022. [Online]. Available: https://arxiv.org/abs/2202.11296v2. [Accessed: 30-Oct-2022]. 



