Toward Computationally Lightweight Stationary and Mobile Computer Vision-based Traffic Surveillance for Assistive Devices in Intelligent Transportation Systems

Date of Award


Degree Type


Degree Name

Doctor of Philosophy (PhD)


Department

Electrical Engineering and Computer Science


Advisor(s)

Senem Velipasalar


Keywords

Advanced Driver Assistance Systems, Assistive Devices, Embedded Smart Cameras, Intelligent Transportation Systems, Lightweight Algorithms, Traffic Surveillance Systems

Subject Categories

Electrical and Computer Engineering


Intelligent Transportation Systems (ITS) technology is now tightly integrated into virtually every aspect of the nation's infrastructure. The high flexibility and low cost of vision sensors have proven to be a driving force in the advancement of computer vision for traffic surveillance. Although many advances have been made to existing traffic surveillance systems for various ITS applications, another side of the ITS industry, advanced driver assistance systems, has not developed as quickly. With recent advances in embedded systems, such as the introduction of low-power embedded smart cameras, potential safety applications can begin to be developed for the consumer market.

The goal of this research is to highlight, and provide solutions to, several key problems facing any ITS-related computer vision application:

1. Lack of hardware-independent and portable solutions for traffic surveillance from either a moving or a static platform;

2. Absence of computer vision-based systems that do not require strict positioning guidelines and interaction with the user for proper operation;

3. Absence of cost-effective solutions for ITS traffic monitoring applications that can be easily deployed by the customer; and

4. Lack of generalized algorithms for prototyping and designing lightweight ITS computer vision-based algorithms for embedded smart camera platforms.

Even current state-of-the-art research and commercially available computer vision products designed for the ITS industry are highly dependent on camera localization and on a perfect set of conditions during initialization. Driver assistance systems, systems installed within the vehicle and designed to provide feedback to the driver, are highly dependent on the angle of the camera for proper detection or require a manual definition of regions of interest (ROI), in addition to relying on sensors other than cameras for proper operation. Traffic surveillance systems, systems that observe traffic from a stationary point, require post-install vendor support for manually defining ROI or adjusting the camera angle for proper detection. No automated and position-independent image processing solutions exist, much less any generalized lightweight algorithms for designing and prototyping ITS-related computer vision-based algorithms.

This dissertation develops algorithms for vehicle detection and tracking from both static and mobile platforms, including headlight and taillight tracking, traffic-pattern analysis for ROI extraction, and vehicle counting for connected vehicle applications. As a capstone proof of concept, a system for detecting, tracking, and reporting the state of traffic signal lights is developed, with potential applications for colorblind and vision-deficient users.
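As a rough illustration of the kind of lightweight processing a traffic-signal monitor might perform, the sketch below classifies the dominant color of a lamp region from its average RGB value using fixed thresholds. This is a hypothetical simplification, not the dissertation's actual algorithm; the function name and threshold values are invented for illustration.

```python
def classify_signal_color(avg_rgb):
    """Return 'red', 'yellow', 'green', or 'unknown' for an (R, G, B) tuple.

    Hypothetical fixed thresholds on the average color of a detected
    lamp region; a real system would also use position, shape, and
    temporal consistency cues.
    """
    r, g, b = avg_rgb
    if r > 150 and g < 100 and b < 100:
        return "red"
    if r > 150 and g > 150 and b < 100:
        return "yellow"
    if g > 150 and r < 100 and b < 150:
        return "green"
    return "unknown"

# Example: a bright red lamp region averages to a strongly red color.
print(classify_signal_color((220, 40, 30)))  # red
```

Per-pixel or per-region checks like this are cheap enough to run on a low-power embedded smart camera, which is the design constraint the dissertation emphasizes.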

When examined collectively, the results presented in this dissertation demonstrate that a reliable ITS-related system can be achieved using only a monocular camera, without the need, where appropriate, for costly sensor-fusion approaches. Instead of sensor fusion, fusion of lightweight algorithms is used to develop ITS algorithms that are less computationally expensive and, in a number of scenarios, more reliable than similar approaches.
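The idea of fusing lightweight algorithms can be pictured as combining the outputs of several inexpensive image cues, for example by majority vote. The following is a minimal hypothetical sketch; the cue labels and the voting rule are assumptions for illustration, not the dissertation's specific method.

```python
from collections import Counter

def fuse_cues(cue_outputs):
    """Combine labels produced by several lightweight cues by majority vote.

    cue_outputs: list of labels, one per cue (e.g. from a color cue,
    a shape cue, and a motion cue). Ties are reported as 'uncertain'.
    """
    counts = Counter(cue_outputs).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "uncertain"
    return counts[0][0]

# Three hypothetical cues examine the same image region:
print(fuse_cues(["vehicle", "vehicle", "background"]))  # vehicle
print(fuse_cues(["vehicle", "background"]))             # uncertain
```

Each individual cue can be simple and fast; reliability comes from agreement among cues rather than from an expensive multi-sensor setup.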

In addition, the results show that most ITS computer vision algorithms can be implemented on embedded smart cameras, minimizing the overall system cost and power requirements. Such results are especially important when designing cost-effective commercial driver assistance systems and traffic surveillance systems.


Surface provides description only. Full text is available to ProQuest subscribers. Ask your Librarian for assistance.