
ELECTRICAL DEVICES AND THEIR CONTROL

Shubhanshi Gupta
Abstract— Devices in electrical engineering generally deal with the application of electricity, electronics and electromagnetism. The aim of this paper is to present a comparative study of the different technologies for controlling electrical devices and electronic systems, with a view to building personal transportation systems for common people by merging everyday mechanical transport systems with wireless technology, thereby overcoming the drawbacks of purely mechanised control and making life easier.
Keywords— Automotive Electronics, Wireless Technology, Hand Gesture Technology, Electrical Devices, Speech Recognition Technology, Remote Control Technology.

I. INTRODUCTION

General methods used for personal transportation are highly mechanised, i.e., they require immense manpower to operate. There is therefore an inevitable need to bring in automotive electronics to overcome this. Modern transportation vehicles employ dozens of electronic systems. These systems are responsible for operational controls such as the throttle, brake and steering, as well as many comfort and convenience systems such as HVAC, infotainment and lighting. It would not be possible for automobiles to meet modern safety and fuel-economy requirements without electronic controls. Newer technologies, such as robotics, can make automated technology better and faster. One such technique is a hand gesture recognition system, in which the gesture acts as a stimulus fed as input to a robot. This can be enhanced further by making it useful to common people and bringing it down to the public level, which can only be done by making gesture-controlled machines more robust, easier and user friendly. The technology can be used to develop small-scale personal transportation (for example, a personalised robocar such as those manufactured by Segway), applied to existing personal mechanised systems, or built into human-aid medical transportation machines which otherwise have to be drawn mechanically or require manpower input in some form.

Consider, for example, the wheelchair. Wheelchairs are broadly classified into two types: manually propelled and electric powered. Most manual wheelchairs have two push handles at the top of the back to allow propulsion by a second person. This makes the user dependent on others for movement, or leaves them to drag themselves along by turning the wheels on their own, which is certainly not possible for everyone. In contrast, electrically driven or motorized wheelchairs deliver a great deal of comfort to their users because they can be controlled easily. Motorized wheelchairs are useful for those unable to propel a manual wheelchair, or who need to cover distances or terrain that would be fatiguing in a manual chair. They may be used not just by people with "traditional" mobility impairments but also by people with cardiovascular and fatigue-based conditions. A smart wheelchair goes further: its purpose is to reduce or eliminate the user's task of driving. It is usually controlled by a computer, has a suite of sensors and applies techniques from mobile robotics, though this is not strictly necessary. The interface may be a conventional wheelchair joystick, a "sip-and-puff" device, or a touch-sensitive display connected to a computer. This differs from a conventional motorized or electric wheelchair, in which the user exerts manual control over motor speed and direction via a joystick or other switch- or potentiometer-based device, without intervention by the wheelchair's control system.

II. HAND GESTURE RECOGNITION SYSTEM

Hand gestures have long been one of the most common and natural communication media among human beings. Hand gesture recognition research has attracted a lot of attention because of its applications in interactive human-machine interfaces and virtual environments. Most recent work on hand gesture interfaces falls into two categories: glove-based methods and vision-based methods. Glove-based gesture interfaces require the user to wear a cumbersome device and generally to carry a load of cables connecting the device to a computer. Many vision-based techniques, such as model-based and state-based approaches, have been proposed for locating objects and recognizing gestures, and the number of gesture recognition studies using vision-based methods has been increasing. The human hand is a highly deformable articulated object with many degrees of freedom, and through different postures and motions it can express information for various purposes. General tracking and accurate 3D pose estimation would therefore probably require elaborate 3D hand models with time-consuming initialization and updating/tracking procedures. Our aim here is to track a number of well-defined, purposeful hand postures that the user performs in order to communicate a limited set of commands to the computer. This allows us to use a simpler, view-based shape representation which is still discriminative enough to find and track a set of known hand postures in complex scenes.
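The view-based idea above can be sketched in miniature: each known posture is stored as a small binary silhouette template, and a candidate silhouette is matched to the nearest template. The 3x3 grids and template names here are invented purely for illustration; a real system would extract much larger silhouettes from camera frames.

```python
# Each posture template is a tiny binary silhouette (1 = hand pixel),
# flattened row by row. These shapes are illustrative, not real data.
TEMPLATES = {
    "open_palm": (1, 1, 1, 1, 1, 1, 0, 1, 0),
    "fist":      (0, 0, 0, 1, 1, 1, 1, 1, 1),
    "point_up":  (0, 1, 0, 0, 1, 0, 0, 1, 0),
}

def classify_posture(grid):
    """Return the posture whose template has the smallest Hamming distance."""
    def distance(a, b):
        return sum(p != q for p, q in zip(a, b))
    return min(TEMPLATES, key=lambda name: distance(TEMPLATES[name], grid))
```

Because matching is nearest-neighbour, a slightly noisy silhouette still lands on the intended posture, which is the discriminative-but-simple behaviour the view-based representation is chosen for.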

Fig.1 Pranav Mistry working with hand gesture technology

A. Pointing Device Gestures

Also known as mouse gestures, these combine pointing-device movements and clicks which the software recognizes as a specific command. Pointing device gestures can provide quick access to common functions of a program, and can also help people who have difficulty typing on a keyboard. For example, in a web browser, the user could navigate to the previously viewed page by pressing the right button, moving the pointing device briefly to the left, then releasing the button. The modern touch screens of tablet devices, such as the iPad, use multi-touch technology and gestures as a primary part of their interface. Many modern touchpads, which replace the traditional mouse in laptops, have similar gesture support; a common gesture is a two-finger downwards or upwards motion to scroll the currently active page. The rising popularity of touchscreen interfaces has made gestures a standard feature of computing devices.
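A minimal mouse-gesture recognizer can be sketched as follows: pointer positions are collapsed into a string of cardinal strokes (L/R/U/D), and the stroke string is looked up in a gesture table. The gesture table and threshold value are assumptions for illustration, not any browser's actual bindings.

```python
def strokes(points, threshold=10):
    """Collapse a list of (x, y) pointer positions into cardinal strokes."""
    out = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) < threshold and abs(dy) < threshold:
            continue  # movement too small to count as a stroke
        # Screen y grows downward, so dy > 0 means a downward stroke.
        step = "LR"[dx > 0] if abs(dx) >= abs(dy) else "UD"[dy > 0]
        if not out or out[-1] != step:  # merge repeated strokes
            out.append(step)
    return "".join(out)

# Hypothetical gesture bindings, in the spirit of the "back" example above.
GESTURES = {"L": "back", "R": "forward", "DR": "close_tab"}

def recognize(points):
    """Map a pointer trail to a command name, or None if unrecognized."""
    return GESTURES.get(strokes(points))
```

So a press-drag-left trail such as `[(100, 100), (40, 100)]` yields the stroke `"L"` and the `back` command, matching the browser example in the text.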

Fig.2 The mouse gesture for "back" in Opera: the user holds down the right mouse button, moves the mouse left, and releases the right mouse button.

III. PHOTO DETECTION TECHNOLOGY

The dominant part of this field is the remote-control technology used in home-theater applications, built on infrared (IR). Infrared light is also known as plain old "heat". The basic premise at work in an IR remote control is the use of light to carry signals between the remote and the device it is directing. Infrared light lies in the invisible portion of the electromagnetic spectrum. An IR remote control (the transmitter) sends out pulses of infrared light that represent specific binary codes. These binary codes correspond to commands such as Power On/Off and Volume Up. The IR receiver in the TV, stereo or other device decodes the pulses of light into binary data (ones and zeroes) that the device's microprocessor can understand; the microprocessor then carries out the corresponding command.

Pushing a button on a remote control sets in motion a series of events that causes the controlled device to carry out a command. The process works something like this:
1) You push the "volume up" button on your remote control, causing it to touch the contact beneath it and complete the "volume up" circuit on the circuit board. The integrated circuit detects this.
2) The integrated circuit sends the binary "volume up" command to the LED at the front of the remote.
3) The LED sends out a series of light pulses that corresponds to the binary "volume up" command.

Infrared remote controls work well enough to have stuck around for 25 years, but they have some limitations related to the nature of infrared light. First, infrared remotes have a range of only about 30 feet (10 meters), and they require line of sight: the infrared signal will not transmit through walls or around corners, so you need a straight line to the device you are trying to control. Also, infrared light is so ubiquitous that interference can be a problem. Everyday infrared sources include sunlight, fluorescent bulbs and the human body. To avoid interference from other sources, the infrared receiver on a TV responds only to a particular wavelength of infrared light, usually 980 nanometers, with filters on the receiver blocking light at other wavelengths. Still, sunlight can confuse the receiver because it contains infrared light at the 980-nm wavelength. To address this, the light from an IR remote control is typically modulated to a frequency not present in sunlight, and the receiver responds only to 980-nm light modulated to that frequency. The system does not work perfectly, but it cuts down a great deal on interference.

While infrared remotes are the dominant technology in home-theater applications, there are other niche-specific remotes that work on radio waves instead of light waves. If you have a garage-door opener, for instance, you have an RF (Radio Frequency) remote.
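The pulse encoding described above can be made concrete with a toy space-coding encoder and decoder, in the style of the Sony scheme shown in Fig.3: every bit is a fixed-length light pulse followed by a gap whose length carries the bit value. The durations and the 7-bit "volume up" code here are invented for illustration and do not reproduce any real remote's protocol.

```python
MARK = 0.6                        # ms of light per pulse (illustrative)
SPACE_ZERO, SPACE_ONE = 0.6, 1.2  # gap length encodes the bit value

def encode(bits):
    """Turn a bit string like '0010010' into (pulse_ms, gap_ms) frames."""
    return [(MARK, SPACE_ONE if b == "1" else SPACE_ZERO) for b in bits]

def decode(frames):
    """Recover the bit string by thresholding the gap lengths."""
    return "".join("1" if gap > 0.9 else "0" for _pulse, gap in frames)

VOLUME_UP = "0010010"  # hypothetical command code, not a real Sony code
```

A receiver built this way only needs to time the dark gaps between pulses, which is why the length of the spaces, rather than the pulses themselves, carries the data.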

Fig.3 Sony TV remotes use a space-coding method in which the length of the spaces between pulses of light represents a one or a zero.

Moreover, ultraviolet light technology, though rarely used, can also control electrical devices.

IV. SPEECH CONTROL TECHNOLOGY

The term voice recognition or speaker identification refers to finding the identity of "who" is speaking, rather than what they are saying. Recognizing the speaker can simplify the task of translating speech in systems that have been trained on a specific person's voice, or it can be used to authenticate or verify the identity of a speaker as part of a security process. These techniques can also be used to control everyday mechanical objects, as in voice-controlled real-time operating systems, where the user can interact with real-world objects, bridging the real and binary worlds, through simple speech commands. This is very useful in the field of robotics, as it simplifies complicated instruction inputs manifold by reducing them to a voice command.

A. Speech to Data/Output Conversion

In computer science, speech recognition (SR) is the translation of spoken words into text. It is also known as "automatic speech recognition" (ASR), "computer speech recognition", or simply "speech to text" (STT). Some SR systems are "speaker independent", while others use "training", in which an individual speaker reads sections of text into the SR system; the system analyzes the person's specific voice and uses it to fine-tune recognition of that person's speech, resulting in more accurate transcription. Systems that use training are called "speaker dependent". Speech recognition applications include voice user interfaces such as voice dialing (e.g. "Call home"), call routing (e.g. "I would like to make a collect call"), domestic appliance control, search (e.g. finding a podcast where particular words were spoken), simple data entry (e.g. entering a credit card number), preparation of structured documents (e.g. a radiology report), speech-to-text processing (e.g. word processors or email), and aircraft control (usually termed Direct Voice Input).
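Once an utterance has been transcribed, routing it to an action like voice dialing or appliance control is essentially pattern matching on the transcript. The following sketch shows one plausible shape for such a router; the command grammar and action names are assumptions made up for this example.

```python
import re

# Hypothetical command grammar: compiled pattern -> action name.
COMMANDS = [
    (re.compile(r"^call (?P<name>\w+)$"), "dial"),      # e.g. "Call home"
    (re.compile(r"^volume (up|down)$"), "volume"),      # appliance control
    (re.compile(r"^find (?P<query>.+)$"), "search"),    # spoken search
]

def route(transcript):
    """Map an ASR transcript to (action, arguments), or None if unmatched."""
    text = transcript.lower().strip()
    for pattern, action in COMMANDS:
        m = pattern.match(text)
        if m:
            # Prefer named groups; fall back to the first positional group.
            return action, m.groupdict() or {"arg": m.group(1)}
    return None
```

For instance, `route("Call home")` yields the `dial` action with `{"name": "home"}`, the voice-dialing case mentioned above.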

To convert speech to on-screen text or a computer command, a computer has to go through several complex steps. When you speak, you create vibrations in the air. The analog-to-digital converter (ADC) translates this analog wave into digital data that the computer can understand. To do this, it samples, or digitizes, the sound by taking precise measurements of the wave at frequent intervals. The system filters the digitized sound to remove unwanted noise, and sometimes to separate it into different bands of frequency. It also normalizes the sound, or adjusts it to a constant volume level. It may also have to be temporally aligned. People don't always speak at the same speed, so the sound must be adjusted to match the speed of the template sound samples already stored in the system's memory.
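The sampling and volume-normalization steps just described can be sketched numerically. This is a simplified model (pure-Python lists, a made-up 8 kHz rate, peak normalization), not a real ADC front end.

```python
import math

def sample(wave, rate, duration):
    """Digitize a continuous wave by taking `rate` measurements per second."""
    n = int(rate * duration)
    return [wave(i / rate) for i in range(n)]

def normalize(samples, target_peak=1.0):
    """Adjust the signal to a constant volume level by scaling its peak."""
    peak = max(abs(s) for s in samples) or 1.0  # avoid dividing by zero
    return [s * target_peak / peak for s in samples]

# A quiet 440 Hz tone sampled at 8 kHz for 10 ms, then brought up to
# full scale -- the "constant volume level" step from the text.
tone = sample(lambda t: 0.25 * math.sin(2 * math.pi * 440 * t), 8000, 0.01)
leveled = normalize(tone)
```

Raising the sampling rate (and the measurement precision) captures the wave more faithfully, which is exactly the trade-off noted in the Fig.4 caption.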

Next, the signal is divided into small segments as short as a few hundredths of a second, or even thousandths in the case of plosive consonant sounds (consonant stops produced by obstructing airflow in the vocal tract, like "p" or "t"). The program then matches these segments to known phonemes in the appropriate language. A phoneme is the smallest element of a language: a representation of the sounds we make and put together to form meaningful expressions. There are roughly 40 phonemes in the English language (different linguists have different opinions on the exact number), while other languages have more or fewer. The next step seems simple but is actually the most difficult to accomplish, and it is the focus of most speech recognition research: the program examines phonemes in the context of the other phonemes around them. It runs the contextual phoneme plot through a complex statistical model and compares it to a large library of known words, phrases and sentences. The program then determines what the user was probably saying and either outputs it as text or issues a computer command.
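The contextual step above can be illustrated in miniature: when one phoneme sequence maps to several candidate words (homophones), a statistical model of word context decides among them. The toy lexicon, ARPAbet-style phoneme spellings and bigram counts below are invented stand-ins for the "large library of known words, phrases and sentences".

```python
# Toy lexicon: phoneme sequence -> homophone candidates (illustrative).
LEXICON = {
    ("dh", "eh", "r"): ["there", "their"],
    ("t", "uw"): ["two", "too", "to"],
}

# Hypothetical bigram counts playing the role of the statistical model.
BIGRAMS = {
    ("over", "there"): 9, ("over", "their"): 1,
    ("lost", "their"): 8, ("lost", "there"): 1,
}

def choose_word(prev_word, phonemes):
    """Pick the candidate the bigram model makes most likely in context."""
    candidates = LEXICON[tuple(phonemes)]
    return max(candidates, key=lambda w: BIGRAMS.get((prev_word, w), 0))
```

Given identical sounds, the preceding word tips the decision: after "over" the model prefers "there", after "lost" it prefers "their".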

Fig.4 An ADC translates the analog waves of your voice into digital data by sampling the sound. The higher the sampling and precision rates, the higher the quality.

B. Speech Recognition: Weaknesses and Flaws

1) Low signal-to-noise ratio: The program needs to "hear" the words spoken distinctly, and any extra noise introduced into the sound will interfere with this. The noise can come from a number of sources, including loud background noise in an office environment. Users should work in a quiet room with a quality microphone positioned as close to their mouths as possible. Low-quality sound cards, which provide the input for the microphone to send the signal to the computer, often do not have enough shielding from the electrical signals produced by other computer components, and can introduce hum or hiss into the signal.
2) Overlapping speech: Current systems have difficulty separating simultaneous speech from multiple users, as in an office meeting.
3) Homonyms: Homonyms are words that are spelled differently and have different meanings but sound the same. "There" and "their", "air" and "heir", "be" and "bee" are all examples. A speech recognition program cannot tell the difference between these words based on sound alone. However, extensive training of systems, together with statistical models that take word context into account, has greatly improved performance.

V. WIRELESS CONTROL TECHNOLOGY

A wireless sensor network (WSN) consists of spatially distributed autonomous sensors that monitor physical or environmental conditions, such as temperature, sound and pressure, and cooperatively pass their data through the network to a main location. More modern networks are bi-directional, also enabling control of sensor activity. The development of wireless sensor networks was motivated by military applications such as battlefield surveillance; today such networks are used in many industrial and consumer applications, such as industrial process monitoring and control, and machine health monitoring.
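The cooperative relaying at the heart of a WSN can be sketched as nodes forwarding a reading hop by hop toward the main location (the sink). The node names, routing table and topology below are invented for illustration; real WSNs build such routes dynamically.

```python
# Static next-hop routing table for a tiny illustrative network:
# s1 -> s2 -> sink, and s3 reaches the sink directly.
ROUTES = {"s1": "s2", "s2": "sink", "s3": "sink"}

def forward(origin, reading, routes=ROUTES):
    """Relay a sensor reading hop by hop until it reaches the sink.

    Returns (path_taken, reading) so the sink knows where data came from.
    """
    path, node = [origin], origin
    while node != "sink":
        node = routes[node]  # hand the reading to the next hop
        path.append(node)
    return path, reading
```

A temperature reading from `s1` thus travels `s1 -> s2 -> sink`, with intermediate nodes cooperatively passing data they did not themselves measure, which is exactly what distinguishes a sensor network from a set of independent sensors.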

VI. CONCLUSIONS

From the above descriptions, it can be concluded that there are several new methods and techniques (be they wireless, cordless, visual, audio or photosensitive) through which to operate an electrical device. The need of the hour is to implement them at ground level: to help common people and make them more familiar with such technologies by enhancing existing electrical devices and encouraging them to get more out of the new techniques. Generalized gadgets and other devices are a firm tool to catalyze the personalization of enhanced control of electrical devices.

ACKNOWLEDGMENT

I would like to thank my parents and Mrs. Sunita Parihar, without whose help this paper would not have been possible.
