Our project name is viSparsh. This word was coined by our team; it is a conglomeration of two words, namely vision (Vis) and touch (Sparsh). We felt the name viSparsh truly depicted the purpose of our project, which is to aid visually impaired people through the sense of touch. We also thought it an apt name for our team, since the same word signifies our aim of touching the lives of many people through our vision.
The aim of the project is to develop a haptic belt using the Xbox 360 Kinect for visually impaired persons, where the initial technical effort is to develop relatively low-cost technologies and integrate them into a socially useful application. The project would be one of its kind in India: although a plethora of technological products is already on the market, only a few actually create the impact needed by Indian society. We are aiming to build a state-of-the-art, easy-to-use product customized for the Indian market.
The final product will be not only a navigation device utilizing the Kinect, GPS, and a digital compass, but also an entertainment and utility product.

Hand Gesture Recognition in Android

Gestures are a powerful means of communication among humans. In fact, gesturing is so deeply rooted in our communication that people often continue gesturing while speaking on the telephone. Hand gestures provide a complementary modality to speech for expressing one's ideas. The information conveyed by hand gestures in a conversation includes degree, discourse structure, and spatial and temporal structure. A natural interaction between humans and computing devices can therefore be achieved by using hand gestures for communication between them.
The purpose of this project is to provide a highly sophisticated Human Machine Interface.
The camera of the computing device is opened alongside a photo-viewer application. The user makes appropriate gestures, which are captured by the camera; recognition algorithms determine the type of each gesture and convert it into computer-understandable commands. These commands are mapped to an application, where the intended actions are performed.
This gesture-based approach allows users to interact with computers through hand postures, with the system adapting to different lighting conditions and backgrounds. Its efficiency makes it suitable for real-time applications.
Gesturing can be used by developers as a tool for building a wide range of applications, and by typical users of Android smartphones and tablets. People who are physically handicapped will also find this system very useful.
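As a sketch of the final step described above, a recognized gesture could be translated into an application command with a simple lookup. The gesture and command names below are illustrative assumptions, not identifiers from the project code:

```java
// Hypothetical sketch: mapping recognized gestures to photo-viewer commands.
// Gesture and Command names are illustrative, not taken from the project.
enum Gesture { SWIPE_LEFT, SWIPE_RIGHT, SWIPE_UP, SWIPE_DOWN, UNKNOWN }

enum Command { NEXT_PHOTO, PREVIOUS_PHOTO, ZOOM_IN, ZOOM_OUT, NONE }

class GestureMapper {
    // Translate a recognized gesture into an application command.
    static Command map(Gesture g) {
        switch (g) {
            case SWIPE_LEFT:  return Command.NEXT_PHOTO;
            case SWIPE_RIGHT: return Command.PREVIOUS_PHOTO;
            case SWIPE_UP:    return Command.ZOOM_IN;
            case SWIPE_DOWN:  return Command.ZOOM_OUT;
            default:          return Command.NONE; // unrecognized gestures do nothing
        }
    }
}
```

A real application would register one such mapping per target application, so the same swipe could mean "next photo" in a viewer and "next song" in a music player.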

System Analysis

The application user performs gestures with the hand. The gesture-recognition system uses a video camera to capture images of the hand movement: it captures the live stream and extracts frames from it. The gesture-recognition software tracks the moving hand's features, identifies the motion, and reports it to the Android application, which then issues commands to the currently running application.
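One simple way to implement the "identifies the motion" step, offered here only as an illustrative sketch and not as the project's actual algorithm, is to track the hand's centroid across the extracted frames and classify the dominant direction of travel:

```java
// Illustrative sketch (not the project's actual algorithm): classify the
// motion of a tracked hand from its centroid position in successive frames.
class MotionClassifier {
    // centroids[i] = {x, y} pixel coordinates of the hand in frame i.
    static String classify(int[][] centroids, int minDisplacement) {
        int dx = centroids[centroids.length - 1][0] - centroids[0][0];
        int dy = centroids[centroids.length - 1][1] - centroids[0][1];
        // Ignore small jitters that do not constitute a deliberate gesture.
        if (Math.abs(dx) < minDisplacement && Math.abs(dy) < minDisplacement)
            return "NONE";
        // The axis with the larger displacement decides the gesture.
        if (Math.abs(dx) >= Math.abs(dy))
            return dx > 0 ? "SWIPE_RIGHT" : "SWIPE_LEFT";
        return dy > 0 ? "SWIPE_DOWN" : "SWIPE_UP";
    }
}
```

The threshold makes the classifier robust to camera noise; a production system would also weight intermediate frames rather than only the endpoints.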
Operating environment
Hardware requirements
- An evaluation kit with the OMAP4430 processor (PandaBoard).
- A motion-sensing camera.
- RAM: 120 MB or more.
- Hard disk: minimum 200 MB.

Software requirements
- Android SDK 2.0 or later.
- Android NDK.
- Eclipse IDE.
- Java and XML.

Functional requirements
The system is required to perform the following functions.
- Switch on the camera and open an application simultaneously.
- The camera should run in video-capture mode in the background, while the intended application remains in the foreground.
- Capture the gestures made by the user with the motion-sensing camera.
- Perform the corresponding actions for the appropriate gestures made by the user.
Non-functional requirements
- Dalvik virtual machine optimized for Android devices.
- Rich development environment, including device emulators, tools for debugging, memory and performance profiling, and a plug-in for the Eclipse IDE.
- The system is expected to run on low-memory devices as well.
- Response time should be minimal, i.e., an action should be performed as soon as the gesture is made.
- The system should ignore inappropriate gestures made by the user.
- Availability of the system depends on the availability of the device and its service.
- Documentation provided with the application should be simple and easy to understand.
- Platform compatibility is limited to Android devices.
- The product build is scalable.
- Usability by the target user community is given the utmost importance.
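The requirements that inappropriate gestures be ignored and that responses be immediate can be partly met by debouncing: accepting a gesture only if a minimum interval has elapsed since the last accepted one, so a single hand motion does not trigger twice. This is a hedged sketch of one such policy, with names of my own choosing:

```java
// Hypothetical sketch of gesture debouncing: accept a gesture only if enough
// time has passed since the last accepted gesture. Interval is configurable.
class GestureDebouncer {
    private final long intervalMs;
    private boolean seen = false;
    private long lastAccepted;

    GestureDebouncer(long intervalMs) { this.intervalMs = intervalMs; }

    // Returns true if the gesture at timestampMs should be acted upon.
    boolean accept(long timestampMs) {
        if (seen && timestampMs - lastAccepted < intervalMs)
            return false; // too soon after the previous gesture: ignore
        seen = true;
        lastAccepted = timestampMs;
        return true;
    }
}
```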

BY PES School of Engineering,
Bangalore India.


System-on-Module based on the OMAP4430, a clone of the PandaBoard, with a size of 40 mm × 40 mm × 2.5 mm.
- Processor: OMAP4430
- 1 GB LPDDR2
- 8 GB eMMC flash
- 10/100 Ethernet (SPI interface)


picoFlamingo is a portable presentation solution initially developed for the BeagleBoard and the picoDLP projector, but it can be executed on any OpenGL ES 2.0-compliant system. Slides can contain text, images, live video streams, and 3D objects that can be animated in 3D space and dynamically updated to produce advanced user interfaces. When used in combination with NetKitty, picoFlamingo can be controlled remotely from any Bluetooth- or network-enabled device. Simple remote-control tools for Symbian S60, OpenMoko, and Android 1.5 are included, along with a set of small applications for video streams and voice commanding.
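A network remote control like the ones described above might, in its simplest form, open a TCP connection and send a text command. The host, port, and command string below are assumptions for illustration only; the actual picoFlamingo/NetKitty protocol should be consulted:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.Socket;

// Hypothetical sketch of remote slide control: send a plain-text command to a
// listening presentation host over TCP. The command vocabulary is assumed.
class RemoteControl {
    static void sendCommand(String host, int port, String command) throws IOException {
        try (Socket s = new Socket(host, port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.println(command); // e.g. a "next slide" request
        }
    }
}
```

Because the channel is a plain socket, the same client works over Wi-Fi or a Bluetooth network profile, which is what makes phone-based remotes easy to write.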

Panda in a Tree

Panda in a Tree: or, where to Google Earth

A 2011 testbed for graphical display adaptations with Google Earth. Although several open-source image-manipulation applications address the creation and editing of images, the OMAP's display properties may permit work in cartographic formats. While technical drawing and engineering plans cover part of the image-format application space, land maps with feature and topological overlays would also exercise the processing of intense data streams. Camera offloading and scanner image lifting are other tasks that might adapt to the TI platform.

CSI-2 & FPGA Acceleration Project

After a successful extension of TI's ARM Cortex-M3 LM3S9B92 MCU with an FPGA acting as a memory controller hub and data-acquisition platform, I'm about to try to do the same for the PandaBoard!

This project will adapt almost any kind of unsupported, expensive, or "hard to get your hands on" device to the PandaBoard using the academic $59 DE0-Nano FPGA development board.
Five interfaces are the main objectives:

1. Memory-mapped GPMC slave - halfway there (TI's Cortex-M3 EPI is just like the GPMC).
2. CSI-2 transmitter core - all relevant specifications obtained!
3. Multi-CMOS/CCD interface controller - LVDS/parallel.
4. DAC/ADC controller - done!
5. Memory-mapped stub interface - for your generic needs (GPIOs, SPI, I2C, CF, etc.) - done!

While most of us are trying to get our hands on a decent MIPI CSI-2 CMOS/CCD sensor for our OMAP4-based boards, I'm going to take it a step further and build a complete FPGA-based interface for both the GPMC and CSI-2 interfaces with Terasic's $79 ($59 academic) DE0-Nano development kit.

The bonus of using this FPGA development kit is that it has:
1. ADI ADXL345, 3-axis accelerometer with high resolution (13-bit)
2. NS ADC128S022, 8-Channel, 12-bit A/D Converter

It can also run a soft 32-bit processor (NIOS II) programmable in ANSI C.

An early version of the Verilog FPGA core for the GPMC, based on Altera's Avalon-MM slave interface, is already available!

I'm short of a working PandaBoard (my friend's board was damaged by my evil cup of coffee).

I'm in need of some other parts like:
1. Misc. CCD/LVDS sensors.
2. Power ICs, LDOs, and regulators.
3. Stepper motors and motor drivers.
4. Other FPGA development boards for testing and verification.

Most of all, I need people who are willing to collaborate on the Linux and FPGA development.

Contact me:


Project website: 

Panda Class Driver

A class driver is a type of hardware device driver that can operate a large number of different devices of a broadly similar type. Class drivers are very often used with USB devices, which share the essential USB protocol, and devices with similar functionality can easily adopt common protocols. Instead of a separate driver for every kind of device, a class driver can operate a wide variety of devices from different manufacturers; to accomplish this, manufacturers make their products compatible with a standardized protocol. A class driver also serves as a base or ancestor class for specific drivers that need slightly different or extended functionality but can take advantage of the majority of the functionality the class driver provides.
In this project we intend to build a PandaBoard class driver for a USB web camera, a thumb scanner, and a USB storage device, for the community-supported Texas Instruments OMAP4430-based PandaBoard.
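The base/ancestor-class idea described above can be sketched with ordinary inheritance; the class and method names here are hypothetical illustrations, not the driver's actual API:

```java
// Illustrative sketch of the class-driver pattern: the base class implements
// behaviour common to every device of the class, while specific drivers
// override only what differs. All names here are hypothetical.
abstract class UsbClassDriver {
    // Common behaviour shared by every device handled by this class driver.
    String enumerate() { return "enumerated as " + deviceClass(); }

    // Each concrete driver declares which USB device class it serves.
    abstract String deviceClass();
}

class WebCameraDriver extends UsbClassDriver {
    @Override String deviceClass() { return "video"; }
}

class MassStorageDriver extends UsbClassDriver {
    @Override String deviceClass() { return "mass-storage"; }
}
```

In a real kernel driver the shared part would be the USB protocol handling, and the subclasses would map to class codes such as video or mass storage.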

Expected Result:

An integrated USB driver able to detect devices such as web cameras, thumb scanners, and block storage devices.

User interface sharing over Wi-Fi between Panda units

This project aims to synchronize a desktop environment running on one PandaBoard with another PandaBoard over Wi-Fi. The project involves segmenting the desktop using an effective algorithm, with the segmented information structured in a custom format. The algorithm's output is then encoded with an efficient encoding method to minimize the use of network resources, and sent over Wi-Fi using RTP to another Wi-Fi terminal, which decodes the input stream and renders the appropriate display. Changes in the sender's display are identified and transmitted, so that only the modified information is sent over Wi-Fi to reproduce the changes on the receiver's end.
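The "send only modified information" step could be sketched as a tile-based diff: split each frame into fixed-size tiles and report only the tiles whose contents changed since the previous frame. The tile size and flat-array frame representation below are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hedged sketch of delta detection: compare each frame tile with the previous
// frame and collect the indices of tiles that need to be re-encoded and sent.
class FrameDiff {
    // Each frame is a flat array of pixel values; tiles are contiguous runs.
    static List<Integer> changedTiles(int[] prev, int[] curr, int tileSize) {
        List<Integer> changed = new ArrayList<>();
        for (int t = 0; t * tileSize < curr.length; t++) {
            int from = t * tileSize;
            int to = Math.min(from + tileSize, curr.length);
            if (!Arrays.equals(Arrays.copyOfRange(prev, from, to),
                               Arrays.copyOfRange(curr, from, to)))
                changed.add(t); // only this tile goes over the wire
        }
        return changed;
    }
}
```

Only the returned tiles would then be encoded and packetized over RTP, which is what keeps the Wi-Fi bandwidth requirement low for mostly static desktops.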

Expected results:

A system using PANDABOARD that can stream complete user interface information of one unit to another display unit over Wi-Fi.

PANDACLOUD - A prototype cloud-computing platform on the PandaBoard

Use the PandaBoard to create and maintain an ad-hoc cloud that provides platform-as-a-service. In today's era of smartphones and tablets, the hardware constraints on running applications residing on your own machine are a major drawback. Consider a scenario where you are traveling (bus/train/flight) among several phones and computational devices: while some are highly active, others may be totally dormant, wasting a lot of processing power exactly when there is a need for it. With a single PandaBoard as the controlling device (cloud server), each platform could register itself with the cloud to provide (and also use) computational capability, acting like a CPU hot-plug that adds more cores to the PandaBoard as devices register, thus creating an ad-hoc cloud system dynamically without any additional resources. With a good load-prediction algorithm in place, applications can be launched on this multi-core system without having to worry about the device's own processing power.
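The controller's registration and dispatch roles might be sketched as follows. The device identifiers and the most-spare-cores dispatch policy are illustrative assumptions, not the proposed load-prediction algorithm:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the ad-hoc cloud controller: devices register their
// spare cores, and work is dispatched to the device with the most to spare.
class CloudController {
    private final Map<String, Integer> spareCores = new HashMap<>();

    // A device joins (or leaves) the ad-hoc cloud.
    void register(String deviceId, int cores) { spareCores.put(deviceId, cores); }
    void unregister(String deviceId) { spareCores.remove(deviceId); }

    // Pick the registered device with the most spare cores, or null if none.
    String dispatch() {
        return spareCores.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(null);
    }
}
```

A real scheduler would replace the static core count with a predicted load, but the register/dispatch/unregister life cycle would look much the same.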


< AVR On The Go >

A small, portable AVR programmer*.

*: The programmer shall have a small screen, a micro keyboard, an autonomous power source, and ISP (6/10-pin) and JTAG connections to connect to the targets.

It would run an embedded Linux distribution and have an environment set up for compiling C/C++ and ASM code for the AVR.
