
Sunday, 21 April 2013

SMART CAMERA & APPLICATIONS FOR THE COMMON MAN


The camera as a technology is scaling new heights across many fields and applications, and camera design is evolving just as quickly. Cameras now appear in portable forms as small as a pen, and they help us in many ways, from taking a simple photograph to detecting elements or metals.

These detection-capable cameras are called “smart cameras” and are used for many purposes, such as traffic surveillance, smart rooms, tracking, and many more applications. Further, smart cameras are built as embedded computing systems on a single board or chip so that they can handle these demanding applications.

Samsung Smart Camera NX1000 (pink): photograph by 'SamsungTomorrow', as posted on Flickr.


What is a Smart Camera?
A smart camera is a compact, self-contained vision system that can be used anywhere image processing can be applied. It combines the camera with a high-performance onboard computing and communication infrastructure in a single embedded device.

A smart camera contains an onboard processor coupled with a charge-coupled device (CCD) image sensor, providing an easily distributed, all-in-one vision system that transmits inspection results in place of, or along with, the raw images.

In a usual vision-system scenario, only a small fraction of a picture or frame is the region of interest (ROI); in a smart camera the whole picture becomes the ROI, and processing is done as soon as the image is captured. In effect, a captured frame of many megabytes is reduced to a small number of result bytes after processing, which is why the smart camera is often called the “brain behind the eyes”.
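As a rough illustration of this data reduction, here is a minimal sketch in C of an on-camera processing step that turns a full monochrome VGA frame into a handful of result bytes. The frame size, threshold, and result layout are illustrative assumptions, not details of any particular smart camera.

#include <stdint.h>
#include <stddef.h>

/* Illustrative on-camera processing step: a monochrome VGA frame
 * (640 x 480 = roughly 300 KB) is reduced to a result of a few bytes. */
#define WIDTH  640
#define HEIGHT 480

typedef struct {
    uint32_t bright_pixels;   /* number of pixels above the threshold */
    uint16_t centroid_x;      /* centroid of the bright region        */
    uint16_t centroid_y;
    uint8_t  object_present;  /* 1 if enough bright pixels were found */
} inspection_result_t;        /* about a dozen bytes instead of ~300 KB */

inspection_result_t process_frame(const uint8_t frame[HEIGHT][WIDTH],
                                  uint8_t threshold)
{
    inspection_result_t r = {0};
    uint64_t sum_x = 0, sum_y = 0;

    for (size_t y = 0; y < HEIGHT; y++) {
        for (size_t x = 0; x < WIDTH; x++) {
            if (frame[y][x] > threshold) {
                r.bright_pixels++;
                sum_x += x;
                sum_y += y;
            }
        }
    }
    if (r.bright_pixels > 50) {   /* arbitrary cut-off for "object present" */
        r.object_present = 1;
        r.centroid_x = (uint16_t)(sum_x / r.bright_pixels);
        r.centroid_y = (uint16_t)(sum_y / r.bright_pixels);
    }
    return r;   /* only this small structure needs to leave the camera */
}

Only the small result structure would ever need to be transmitted; the raw frame never has to leave the camera.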


SMART CAMERAS VS. STANDARD VISION SYSTEMS:
Traditional vision systems use a PC-based approach with a minimal set of algorithms, so dedicated microprocessors, DSPs, and FPGAs do not come into the picture. For a smart embedded camera, however, these devices are the necessary tools for implementing a high-performance camera that can serve the applications mentioned above.

DSPs and FPGAs are also becoming faster. The traditional approach is a PC-based implementation, typically a camera that interfaces directly with a PC, which then processes the captured images.

A smart camera, on the other hand, is a self-contained unit with the processor embedded on a single chip or board, performing everything that would otherwise be done by the PC. Its sensors are flexible and inherently capable of handling many more imaging algorithms and applications.

Assembly of the various parts on a single chip, together with the camera.
HISTORY OF SMART CAMERA GENERATIONS:

From analog to digital cameras: 

• 1st generation: analog surveillance equipment (closed-circuit TV cameras transmitted the video signal over analog lines) 

• 2nd generation: digital back-end components allow real-time automated analysis of incoming data 
• 3rd generation: complete digital transformation; video is converted to the digital domain at the camera and transmitted via a computer network; cameras can also compress video to save bandwidth. 
• 4th generation: intelligent cameras; they perform low-level image-processing operations on the captured frames onboard to improve video compression and the efficiency of the intelligent host. However, most of the processing is still done at a central unit. 

Smart cameras, by contrast, directly perform highly sophisticated video sensing, analysis, processing, and communication on board. They are designed as reconfigurable, flexible processing nodes with self-reconfiguration, self-monitoring, and self-diagnosis. 

Capabilities: 

Shift from a central to a distributed control surveillance system 
Increase the surveillance system’s functionality, availability, and autonomy 
Can react autonomously to changes in the system’s environment 
Can detect events in the monitored scenes. 
A static surveillance system configuration is no longer feasible! 

PROPOSED ARCHITECTURE:
  • A scalable, embedded, high-performance multiprocessor platform consisting of:
      ◦ a network processor
      ◦ a variable number of digital signal processors (DSPs)
  • A commercial off-the-shelf software/hardware architecture was chosen to:
      ◦ support fast prototype development
      ◦ achieve flexibility and performance at a reasonable price
 
Smart Camera Architecture Block Diagram 

The smart camera presented in this article reduces the captured data to the field of interest by making use of its image-processing sensors.


The figure above shows how the camera works and gives an overall view of its operation.

Hardware Architecture: 3 parts

1.   Sensing unit 
a. Monochrome CMOS image sensor 
b. Delivers images with VGA resolution at up to 30 fps 
c. Transfers images via a first-in, first-out (FIFO) memory to the processing unit (PU) 

2.   Processing unit (PU) 
a. Up to 10 Texas Instruments TMS320C64x DSPs can deliver an aggregate performance of up to 80 GIPS while keeping the power consumption low 
b. PCI bus couples the DSPs and connects them to the network processor 

3.   Communication unit 
a. network processor: Intel XScale IXP425
b. establishes the connection between the processing and communication units
c. controls internal and external communication
d. currently supports two interfaces for IP-based external communication: Wired Ethernet and wireless Global System for Mobile Communications/general packet radio service (GSM/GPRS) 

To obtain the final result for an image sensed by the camera, the data must pass through all of the stages above before arriving at the result of interest.
The whole process is carried out on a single chip or board, which contains the sensing unit for the video sensors, the processing unit, and the communication unit that connects to the various software tools and algorithms. Once a result has been generated, a copy is saved in separate memory so that the user can retrieve it later.
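To make the data flow through these three units concrete, here is a minimal, self-contained C sketch of the sensing-processing-communication loop. All function names and the result format are illustrative assumptions and do not correspond to the actual SmartCam code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct { uint8_t pixels[480][640]; } frame_t;           /* VGA monochrome frame */
typedef struct { uint8_t bytes[64]; uint16_t len; } result_t;   /* small result message */

/* 1. Sensing unit: in the real camera the CMOS sensor delivers frames through
 *    a FIFO; here a frame is faked so the sketch stays self-contained. */
static bool sensor_read_frame(frame_t *out)
{
    memset(out->pixels, 0, sizeof out->pixels);
    out->pixels[240][320] = 200;
    return true;
}

/* 2. Processing unit: a stand-in for a DSP algorithm that reduces the whole
 *    frame to a few bytes (here: the mean brightness). */
static void dsp_process_frame(const frame_t *in, result_t *out)
{
    uint64_t sum = 0;
    for (int y = 0; y < 480; y++)
        for (int x = 0; x < 640; x++)
            sum += in->pixels[y][x];
    out->len = (uint16_t)snprintf((char *)out->bytes, sizeof out->bytes,
                                  "mean_brightness=%u",
                                  (unsigned)(sum / (480u * 640u)));
}

/* A copy of the result is kept in separate memory for later retrieval. */
static result_t stored_copy;
static void memory_store_result(const result_t *r) { stored_copy = *r; }

/* 3. Communication unit: the real camera sends results over Ethernet or
 *    GSM/GPRS; here they are simply printed. */
static void net_send_result(const result_t *r)
{
    printf("sending result: %.*s\n", (int)r->len, (const char *)r->bytes);
}

int main(void)
{
    static frame_t frame;   /* static: a full frame is large for the stack */
    result_t result;

    if (sensor_read_frame(&frame)) {          /* sense       */
        dsp_process_frame(&frame, &result);   /* process     */
        memory_store_result(&result);         /* keep a copy */
        net_send_result(&result);             /* communicate */
    }
    return 0;
}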
Software Architecture: 2 frameworks:

1. DSP framework – runs on every DSP
Provides an abstraction of the hardware and communication channels: after the sensing unit has captured an image or video frame, the data must be processed, and this is where the DSPs (digital signal processors) come in.
Supports dynamic loading and unloading of application tasks 
Manages the DSP’s on-chip and off-chip resources 
Algorithms on different DSPs use the service management facilities to dynamically establish connections to each other 
The DSP framework was built on Texas Instruments’ DSP/BIOS operating system. 
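As an illustration of what dynamic loading and unloading of application tasks might look like in C, here is a small sketch of a task interface and a framework dispatch table. The structure and function names are assumptions made for illustration, not the real DSP framework API (which is built on DSP/BIOS).

#include <stdint.h>
#include <stddef.h>

/* Hypothetical task interface for a framework that loads and unloads
 * application tasks on a DSP at run time; all names are illustrative. */
typedef struct dsp_task {
    const char *name;                                    /* task identifier     */
    int  (*init)(struct dsp_task *self);                 /* claim DSP resources */
    int  (*process)(struct dsp_task *self,
                    const uint8_t *frame, size_t len);   /* per-frame work      */
    void (*shutdown)(struct dsp_task *self);             /* release resources   */
    void *state;                                         /* task-private data   */
} dsp_task_t;

#define MAX_TASKS 8
static dsp_task_t *loaded_tasks[MAX_TASKS];   /* table of currently loaded tasks */
static size_t      num_tasks;

/* Load a task: initialize it and add it to the dispatch table. */
int framework_load_task(dsp_task_t *task)
{
    if (num_tasks == MAX_TASKS || task->init(task) != 0)
        return -1;                 /* no free slot, or the task failed to start */
    loaded_tasks[num_tasks++] = task;
    return 0;
}

/* Unload the most recently loaded task (kept simple for the sketch). */
void framework_unload_last_task(void)
{
    if (num_tasks > 0) {
        num_tasks--;
        loaded_tasks[num_tasks]->shutdown(loaded_tasks[num_tasks]);
    }
}

/* Hand each captured frame to every loaded task. */
void framework_dispatch_frame(const uint8_t *frame, size_t len)
{
    for (size_t i = 0; i < num_tasks; i++)
        loaded_tasks[i]->process(loaded_tasks[i], frame, len);
}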

2. SmartCam framework - runs on the network processor 
An abstraction of the DSPs to ensure the application layer’s platform independence 
Application layer uses the provided communication methods to exchange information 
Internal messaging to the DSPs 
External IP-based communication 
Application development by high-level interfaces to DSP algorithms and the DSP framework’s functions 
The XScale processor runs standard Linux; the only customization of the Linux kernel is a DSP kernel module, which the processor uses to establish the connection to the DSPs via the PCI bus.
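As a rough idea of how the Linux-side framework might talk to the DSPs through such a kernel module, here is a hypothetical user-space sketch; the device node name and the message format are assumptions, not the actual driver interface.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical character device exported by the DSP kernel module; the
     * real node name and message protocol are not documented in this post. */
    int fd = open("/dev/smartcam_dsp0", O_WRONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    const char msg[] = "LOAD_TASK tracker";   /* purely illustrative message */
    if (write(fd, msg, strlen(msg)) < 0)      /* travels over the PCI bus    */
        perror("write");

    close(fd);
    return 0;
}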

Standard Smart Vision System
 
Processing of images:

SOFTWARE AND PROGRAMMING TOOLS:

This section discusses the programs that run on an embedded system as well as software tools that are necessary or helpful to implement those programs and to transfer them to the embedded system.
DSPs are usually programmed in C at first, followed by machine code optimization for critical parts. A DSP rarely just executes one tiny program on an endless stream of rather uniform data, but instead has to perform some general tasks occasionally. Thus, it is usually controlled by a DSP operating system (OS).
A large number of companies offer an even larger number of them, frequently classified as real-time OSs. Linux is a common choice due to its flexibility, particularly on systems-on-chip. The following are some of the choices of operating systems for DSPs and/or SoCs. One of their main characteristics is a small footprint, typically only one to a few tens of MB.
• Valourtech (vtLinux)
• Arcturus Networks (uClinux)
• Consumer Electronics Linux Forum (CELF) Linux
• MontaVista Software
• Mentor Graphics (Nucleus PLUS RTOS)
• Palm Inc. (PalmOS)
• Microsoft Corp. (Windows CE/Mobile)
• µITRON
• Lineo Solutions (Linux)
• LynuxWorks (BlueCat Linux)
• Symbian Ltd. (Symbian)
• Metrowerks, now Freescale (Linux)
• Pigeon Point Systems (Monterey Linux)
• Wind River (VxWorks)
• Texas Instruments (DSP/BIOS RTOS) 

Further resources that might be helpful: the “Pocket Guide to Processors for DSP,” at http://www.bdti.com/pocket/pocket.htm, and “The Scientist and Engineer’s Guide to Digital Signal Processing” by Steven W. Smith, at http://www.dspguide.com/. 
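To ground the remark above that DSP code is normally written in plain C first and only later hand-optimized, here is a small, generic example of the kind of kernel involved, a 3-tap moving-average filter. It is purely illustrative and not taken from any vendor library.

#include <stddef.h>
#include <stdint.h>

/* Plain-C version of a typical DSP kernel: a 3-tap moving-average filter.
 * On a real DSP this loop would later be tuned with intrinsics, assembly,
 * or software pipelining, but development normally starts from C like this. */
void moving_average_3(const int16_t *in, int16_t *out, size_t n)
{
    if (n < 3)
        return;
    out[0] = in[0];                     /* keep the border samples unchanged */
    out[n - 1] = in[n - 1];
    for (size_t i = 1; i + 1 < n; i++)
        out[i] = (int16_t)((in[i - 1] + in[i] + in[i + 1]) / 3);
}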

Distributed System Architecture 

Use the smart cameras to implement a distributed intelligent video surveillance system (IVS) 

Partition IVS into distributed logical groups (surveillance clusters) 

The IVS requires an assignment of cameras to a specific cluster; it dynamically and autonomously maps surveillance tasks onto individual cameras depending on their resources and the system’s current state. 

Tasks are deployed onto cameras using a mobile agent system (MAS) built atop the SmartCam framework. Changes in the environment trigger a task migration. 

Quality of service (QoS): parameters include frame rate, transfer delay, image resolution, and video-compression rate. Their levels can change over time due to user interactions or changes in the monitored environment, so novel IVS systems must include dedicated QoS management mechanisms.

Power awareness: Camera supports combined power and QoS management (PoQoS) for distributed IVS systems.

PoQoS dynamically configures the power and QoS level of the camera’s hardware and software to adapt to user requests and changes in the environment. 
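To make the PoQoS idea more concrete, here is a small hypothetical sketch of how a camera could trade QoS parameters against a power budget. The thresholds, parameter values, and policy are illustrative assumptions only.

#include <stdio.h>

/* Hypothetical QoS configuration of one camera and a very simple PoQoS-style
 * policy: when the power budget shrinks, degrade the QoS gracefully. */
typedef struct {
    int frame_rate_fps;     /* e.g. 30, 15, 5       */
    int image_width;        /* e.g. 640 or 320      */
    int compression_level;  /* 0 = low ... 9 = high */
} qos_config_t;

static qos_config_t poqos_adapt(double power_budget_watts)
{
    qos_config_t q;
    if (power_budget_watts > 8.0) {          /* plenty of power: full QoS         */
        q.frame_rate_fps = 30; q.image_width = 640; q.compression_level = 3;
    } else if (power_budget_watts > 4.0) {   /* reduced power: trade some quality */
        q.frame_rate_fps = 15; q.image_width = 640; q.compression_level = 6;
    } else {                                 /* low power: minimum service        */
        q.frame_rate_fps = 5;  q.image_width = 320; q.compression_level = 9;
    }
    return q;
}

int main(void)
{
    qos_config_t q = poqos_adapt(5.0);   /* pretend the budget dropped to 5 W */
    printf("fps=%d width=%d compression=%d\n",
           q.frame_rate_fps, q.image_width, q.compression_level);
    return 0;
}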

Experimental results:

Two identical SmartCam prototypes were built, and up to three additional PCs (1-GHz Pentium III machines running Linux) were integrated to evaluate larger SmartCam networks. The complete SmartCam framework and the MAS could execute on the PCs without any modification. DIET Agents running under Java served as the MAS, and the JamVM Java virtual machine was used on the smart camera prototype. The SmartCam prototype’s Java performance was then compared with that of a standard PC.

The results showed that the interpreter-based JamVM is about 20 times slower than the Sun Java runtime environment (JRE) 1.4.2 on the PCs. The native computing performance between a Pentium III PC and the SmartCam (XScale) differs only by a factor of two. 

Advantages of Smart Cameras:

Cost - Smart cameras are generally less expensive to purchase and set up than the PC-based solution, since they include the camera, lenses, lighting (sometimes), cabling, and processing. 

Simplicity - Software tools available with smart cameras are of the point-and-click variety and are easier to use than those available on PCs. Algorithms come pre-packaged and do not need to be developed, making the smart camera quicker to set up and use. 

Integration - Given their unified packaging, smart cameras are easier to integrate into 
the manufacturing environment. 

Reliability - With fewer moving components (fans, hard drives) and lower temperatures, smart cameras are more reliable than PCs. 

Applications of Smart Camera - 

Multi-camera object-tracking application 
The multi-camera system instantiates only a single tracker (agent) task; the agent follows the tracked object by migrating to the SmartCam that should next observe it. 
The tracking agent is based on a Kanade-Lucas-Tomasi feature tracker. 
Its main advantage is its short initialization time, which makes it applicable for multi-camera object tracking by mobile agents. 
Tracking agents control the handover process using predefined migration regions: when the tracked object enters a migration region, the tracker initiates a handover to the next SmartCam. 
Each migration region is assigned to one or more possible next SmartCams; motion vectors help distinguish among several SmartCams assigned to the same migration region by checking whether the object moves in the correct direction (a sketch of this check follows the list). 
A master-slave approach is used for the tracked-object handover. 
The tracking agent’s migration between SmartCams takes up to 1 second. 

The task-allocation system’s setup time is approximately 190 milliseconds.
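Below is a minimal, hypothetical C sketch of the handover decision described in this list: it checks whether the tracked object lies inside a migration region and whether its motion vector points toward the candidate next camera. The data structures and values are illustrative assumptions, not the actual agent implementation.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical structures for the handover check: a rectangular migration
 * region in image coordinates plus the expected direction toward the
 * candidate next camera. All values are illustrative only. */
typedef struct { float x, y; } vec2_t;
typedef struct {
    float x_min, y_min, x_max, y_max;   /* migration region in the image    */
    vec2_t toward_next_cam;             /* direction toward the next camera */
    int next_cam_id;
} migration_region_t;

static bool in_region(const migration_region_t *r, vec2_t p)
{
    return p.x >= r->x_min && p.x <= r->x_max &&
           p.y >= r->y_min && p.y <= r->y_max;
}

/* Returns the id of the camera to hand over to, or -1 if the object
 * stays with the current camera. */
static int handover_target(const migration_region_t *regions, int n,
                           vec2_t position, vec2_t motion)
{
    for (int i = 0; i < n; i++) {
        if (!in_region(&regions[i], position))
            continue;
        /* Motion-vector check: the object must move roughly toward the
         * camera assigned to this migration region. */
        float dot = motion.x * regions[i].toward_next_cam.x +
                    motion.y * regions[i].toward_next_cam.y;
        if (dot > 0.0f)
            return regions[i].next_cam_id;
    }
    return -1;
}

int main(void)
{
    migration_region_t regions[] = {
        { 600.0f, 0.0f, 640.0f, 480.0f, { 1.0f, 0.0f }, 2 },  /* right edge -> camera 2 */
    };
    vec2_t pos = { 620.0f, 200.0f }, motion = { 3.0f, 0.5f };
    printf("handover to camera %d\n", handover_target(regions, 1, pos, motion));
    return 0;
}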


The approach is good considering that it uses off-the-shelf products. However, the memory footprint and power dissipation are higher than a purpose-built design would require, so it is well suited to testing and research but not yet to real-world deployments.
