Sitemap

A list of all the posts and pages found on the site. For the robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

portfolio

publications

Vehicle classification in video using deep learning

Published in 15th International Conference on Machine Learning and Data Mining, 2019

Vehicle classification in videos has broad applications in intelligent transportation and smart cities. The vehicle classes are defined according to the Federal Highway Administration (FHWA) vehicle types, and two popular deep learning methods, namely, the Faster R-CNN and the YOLO, are applied for vehicle classification. The Faster R-CNN and the YOLO are two representative deep learning methods with applications in object detection and classification. First, three training data sets are manually created from two videos in the low video quality category for training the Faster R-CNN and the YOLO deep learning methods. Second, new videos that are not seen during training are used to evaluate the vehicle classification performance for the deep learning methods. In particular, the comparative evaluation includes the training time, the testing time, the vehicle classification accuracy, as well as the generalization performance of the deep learning methods. The experiments using the New Jersey Department of Transportation (NJDOT) traffic videos show the feasibility of vehicle classification in videos using deep learning methods.
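The per-class accuracy part of the comparative evaluation described above can be sketched as follows. This is an illustrative helper, not code from the paper; the class names and sample labels are made up:

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Compute overall and per-class classification accuracy.

    y_true, y_pred: parallel lists of class labels
    (e.g. FHWA-style vehicle types predicted by a detector).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    per_class = {c: correct[c] / total[c] for c in total}
    overall = sum(correct.values()) / sum(total.values())
    return overall, per_class

# Hypothetical labels for five detected vehicles.
truth = ["car", "car", "truck", "bus", "truck"]
pred = ["car", "truck", "truck", "bus", "truck"]
overall, per_class = per_class_accuracy(truth, pred)
```

Reporting accuracy per class, rather than only overall, is what exposes the generalization gap between classes the abstract alludes to.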

Recommended citation:

@inproceedings{faruque2019vehicle,
  title     = {Vehicle classification in video using deep learning},
  author    = {Faruque, Mohammad O and Ghahremannezhad, Hadi and Liu, Chengjun},
  booktitle = {the 15th International Conference on Machine Learning and Data Mining},
  pages     = {117--131},
  year      = {2019}
}

https://www.researchgate.net/publication/346061113_Vehicle_Classification_in_Video_Using_Deep_Learning

A New Online Approach for Moving Cast Shadow Suppression in Traffic Videos

Published in IEEE International Intelligent Transportation Systems Conference (ITSC), 2020

In applications of traffic video analysis, moving vehicles can induce cast shadows that have negative impacts on the system performance. Here, a new online cast shadow removal method is proposed that integrates pixel-based, region-based, and statistical modeling techniques to detect shadows. Specifically, the global foreground modeling (GFM) method is first applied in order to segment the moving objects along with their cast shadows from the stationary background. The potential shadow pixels are identified by considering the physics-based properties of reflection and comparing the changes in color values in the corresponding background and foreground locations in terms of brightness and chromaticity. A new region-based shadow detection method is proposed using an illumination invariant feature as the input to the k-means clustering method in order to partition each foreground component into separate segments. Each segment is classified into object and shadow based on its portion of potential shadows, the amount of gradient information introduced, and the number of extrinsic terminal points contained. Afterward, the background and foreground values in the RGB and HSV color spaces are utilized to construct six-dimensional feature vectors which are modeled by a mixture of Gaussian distributions to classify the foreground pixels into shadows and objects. Lastly, the results of the previous steps are integrated for final shadow detection. Experiments using public video data ‘Highway-1’ and ‘Highway-3’, and real traffic video data provided by the New Jersey Department of Transportation (NJDOT) demonstrate the effectiveness of the proposed method.
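The pixel-level brightness/chromaticity test described above can be sketched as follows. This is a simplified illustration of the general idea, not the paper's implementation, and the thresholds are assumptions:

```python
import numpy as np

def shadow_candidates(fg, bg, alpha=0.4, beta=0.95, tau_c=0.1):
    """Flag pixels whose foreground value looks like a darkened copy of
    the background: brightness attenuated into [alpha, beta] while the
    chromaticity stays nearly unchanged.

    fg, bg: float RGB images in [0, 1] with shape (H, W, 3).
    Returns a boolean (H, W) mask of potential shadow pixels.
    """
    fg = fg.astype(float)
    bg = bg.astype(float)
    eps = 1e-6
    # Brightness ratio per pixel (mean over the RGB channels).
    ratio = fg.mean(axis=2) / (bg.mean(axis=2) + eps)
    # Chromaticity: each channel's share of the total intensity.
    fg_chroma = fg / (fg.sum(axis=2, keepdims=True) + eps)
    bg_chroma = bg / (bg.sum(axis=2, keepdims=True) + eps)
    chroma_diff = np.abs(fg_chroma - bg_chroma).sum(axis=2)
    return (ratio >= alpha) & (ratio <= beta) & (chroma_diff <= tau_c)

# A shadow darkens the background uniformly; an object changes its color.
bg = np.full((1, 2, 3), 0.6)
fg = np.array([[[0.36, 0.36, 0.36],   # uniformly darkened pixel
                [0.90, 0.10, 0.10]]])  # red object pixel
mask = shadow_candidates(fg, bg)
```

Such a pixel test alone produces false positives on dark objects, which is why the method follows it with region-based clustering and Gaussian mixture modeling.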

Recommended citation:

@inproceedings{ghahremannezhad2021new,
  title        = {A New Online Approach for Moving Cast Shadow Suppression in Traffic Videos},
  author       = {Ghahremannezhad, Hadi and Shi, Hang and Liu, Chengjun},
  booktitle    = {2021 IEEE International Intelligent Transportation Systems Conference (ITSC)},
  pages        = {3034--3039},
  year         = {2021},
  organization = {IEEE}
}

https://github.com/hadi-ghnd/hadi-ghnd.github.io/tree/master/files/shadow1.pdf

A Real Time Accident Detection Framework for Traffic Video Analysis

Published in 16th International Conference on Machine Learning and Data Mining, 2020

Traffic accident detection is an important topic in traffic video analysis, and this paper discusses single-vehicle traffic accident detection. Specifically, a novel real-time traffic accident detection framework, which consists of an automated traffic region detection method, a new traffic direction estimation method, and a first-order logic traffic accident detection method, is presented in this paper. First, the traffic region detection method applies the general flow of traffic to detect the location and boundaries of the roads. Second, the traffic direction estimation method estimates the moving direction of the traffic. The rationale for estimating the traffic direction is that the crashed vehicles often make rapid changes of directions. Third, traffic accidents are detected using the first-order logic decision-making system. Experimental results using the real traffic video data show the feasibility of the proposed method. In particular, traffic accidents are detected in real-time in the traffic videos without any false alarms.
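The rapid-direction-change cue described above can be sketched as follows. This is an illustrative simplification, not the paper's first-order logic system, and the turn threshold is an assumption:

```python
import math

def heading(p, q):
    """Heading angle, in radians, of the displacement from p to q."""
    return math.atan2(q[1] - p[1], q[0] - p[0])

def rapid_turn(track, max_turn_deg=60.0):
    """Return True if any pair of consecutive displacement vectors along
    the trajectory turns by more than max_turn_deg degrees.

    track: list of (x, y) vehicle centroid positions over time.
    """
    for a, b, c in zip(track, track[1:], track[2:]):
        turn = abs(heading(b, c) - heading(a, b))
        turn = min(turn, 2 * math.pi - turn)  # wrap angle into [0, pi]
        if math.degrees(turn) > max_turn_deg:
            return True
    return False

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
crash = [(0, 0), (1, 0), (2, 0), (2, -1)]  # sudden 90-degree turn
```

In a full system a cue like this would be combined with the estimated traffic direction and other predicates before declaring an accident, which is how false alarms are avoided.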

Recommended citation:

@inproceedings{ghahremannezhad2020real,
  title     = {A real time accident detection framework for traffic video analysis},
  author    = {Ghahremannezhad, Hadi and Shi, Hang and Liu, Chengjun},
  booktitle = {the 16th International Conference on Machine Learning and Data Mining},
  pages     = {77--92},
  year      = {2020}
}

http://academicpages.github.io/files/A%20Real%20Time%20Accident%20Detection%20Framework%20for%20Traffic%20Video%20Analysis.pdf

Robust Road Region Extraction in Video Under Various Illumination and Weather Conditions

Published in Fourth IEEE International Conference on Image Processing, Applications and Systems, 2020

Robust road region extraction plays a crucial role in many computer vision applications, such as automated driving and traffic video analytics. Various weather and illumination conditions like snow, fog, dawn, daytime, and nighttime often pose serious challenges to automated road region detection. This paper presents a new real-time road recognition method that is able to accurately extract the road region in traffic videos under adverse weather and illumination conditions. Specifically, the novel global foreground modeling (GFM) method is first applied to subtract the ever-changing background in the traffic video frames and robustly detect the moving vehicles which are assumed to drive in the road region. The initial road samples are then obtained from the subtracted background model in the location of the moving vehicles. The integrated features extracted from both the grayscale and the RGB and HSV color spaces are further applied to construct a probability map based on the standardized Euclidean distance between the feature vectors. Finally, the robust road mask is derived by integrating the initially estimated road region and the regions located by the flood-fill algorithm. Experimental results using a dataset of real traffic videos demonstrate the feasibility of the proposed method for automated road recognition in real-time.
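The standardized Euclidean distance step described above can be sketched as follows. This is a minimal illustration with made-up feature vectors and an assumed exponential mapping, not the paper's feature extraction:

```python
import numpy as np

def road_probability(features, road_mean, road_std, scale=1.0):
    """Map per-pixel feature vectors to a road likelihood in (0, 1].

    features: (N, D) feature vectors per pixel (e.g. grayscale plus
    RGB and HSV channels). road_mean, road_std: (D,) statistics of the
    initial road samples. The standardized Euclidean distance divides
    each dimension by its standard deviation before computing the
    distance; an exponential then turns distance into a score.
    """
    eps = 1e-6
    z = (features - road_mean) / (road_std + eps)
    dist = np.sqrt((z ** 2).sum(axis=1))
    return np.exp(-scale * dist)

# Pixels near the road statistics score near 1, outliers near 0.
mean = np.array([0.5, 0.5, 0.5])
std = np.array([0.1, 0.1, 0.1])
feats = np.array([[0.5, 0.5, 0.5],   # matches the road samples
                  [0.9, 0.1, 0.9]])  # far from the road statistics
probs = road_probability(feats, mean, std)
```

Standardizing by the per-dimension standard deviation keeps channels with large dynamic range (such as hue) from dominating the distance.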

Recommended citation:

@inproceedings{ghahremannezhad2020robust,
  title        = {Robust road region extraction in video under various illumination and weather conditions},
  author       = {Ghahremannezhad, Hadi and Shi, Hang and Liu, Chengjun},
  booktitle    = {2020 IEEE 4th International Conference on Image Processing, Applications and Systems (IPAS)},
  pages        = {186--191},
  year         = {2020},
  organization = {IEEE}
}

https://www.researchgate.net/publication/346084891_Robust_Road_Region_Extraction_in_Video_Under_Various_Illumination_and_Weather_Conditions

talks

teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.