
Immersive Gaming Utilizing the Concept of Tangimapping in Virtual Reality Systems

Blain Rinehart, 2011

Overview

The idea presented here differs from virtual reality, augmented reality, and other forms of altered reality in that it deviates from pursuing artificial reconstruction of tangible objects. We begin by merging two well-known ideas. The first requirement for tangimapping is real-time motion capture of objects using active RFID trackers. The second is the replacement of sensory input using stereoscopic HMDs and audio devices (i.e., virtual reality). These two ideas have been merged before: a user's real-time hand motion is tracked, and a virtual hand is output to the HMD the user is wearing, so that a virtual representation of the hand appears in the correct perceived location in space and time. The idea presented here uses the same principle, but instead of tracking only the hand, we track the user, other users, movable objects, and immovable objects within a large game environment. The benefits include fully interactive virtual environments (including other users) and virtual representations of all objects in the game environment at their actual relative positions. Tracking the positions of all movable objects lets the software reconstruct the actual game world and add enhancements (lighting effects, for example) that the real game environment does not include. The ability for users to interact with real objects that are represented in the virtual world adds a dimension of realism that current virtual reality systems cannot match. The drawback is that an entire physical game environment must be built and tracked, which runs counter to the usual goals of virtual reality.

Technology (HMD)

One piece of the tangimapping puzzle is the use of alternative sensory input for vision and sound. The head-mounted display uses stereoscopic screens, one for each eye, to create the illusion of depth perception. The user's virtual cameras in the video game (similar to the first-person view a player sees in a console game) should match the player's actual location in the physical game environment. This allows the user's movement to map correctly and feel natural during play. Since the player's joint movements are tracked, the virtual camera can move exactly relative to the other tracked markers, so that the user receives the correct visual output. The user could in principle switch between different camera views to spectate from other areas, but this is only useful if people outside the game environment want to see exactly what a user inside is seeing. The HMD is essentially two screens, one per eye, outputting the real-time rendered graphics of the game. Stereoscopic 3D sound can also be added to enhance gameplay, so that stepping on glass or gunshots sound realistic. Other audio, such as other users talking to you, may also be needed, so audio enhancement would be ideal.
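The camera placement described above can be sketched in a few lines. This is a minimal illustration, not an implementation from the original text: the names `eye_positions`, `head_pos`, `right_vec`, and the `EYE_SEPARATION` value are all assumptions. The tracked head position is offset along the head's right vector to give one virtual camera per eye, which is what produces the stereoscopic depth cue.

```python
# Minimal sketch of head-tracked stereoscopic camera placement.
# EYE_SEPARATION is an assumed average interpupillary distance in meters;
# all names here are illustrative, not from the original document.

EYE_SEPARATION = 0.064

def eye_positions(head_pos, right_vec, half_sep=EYE_SEPARATION / 2):
    """Offset the tracked head position along the head's unit right
    vector to get one virtual camera position per eye."""
    left = tuple(p - half_sep * r for p, r in zip(head_pos, right_vec))
    right = tuple(p + half_sep * r for p, r in zip(head_pos, right_vec))
    return left, right

# A head tracked at (1.0, 1.7, 0.0), facing so its right vector is +x:
left_eye, right_eye = eye_positions((1.0, 1.7, 0.0), (1.0, 0.0, 0.0))
```

Each frame, the tracker's latest head pose would be fed through this before rendering, so the virtual viewpoint always coincides with the player's real one.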

Technology (motion-capture)

The second part of the tangimapping puzzle deals with tracking the real-time positions of RFID trackers in the game environment relative to an absolute reference frame. One benefit of this system is that non-moving objects in the game do not need to be tracked at all, since their positions are always known. The trackers are tied into a hierarchy, starting with the absolute reference frame. For example, suppose there is a tracker for a weapon, but the weapon also has a moving part. Since the moving part depends on the weapon's position and orientation, the moving part is the child entity of the parent weapon, and the weapon is the child entity of the absolute reference frame. This way, we only need a one-dimensional tracker for the moving part, since the sliding part can only move in one direction relative to the weapon, even though it is actually moving through three dimensions with orientation. We do not need to track it in 3D because we know the 3D position and orientation of its parent entity. This hierarchy can be used for any sliding child component of a parent entity, such as on weapons, dressers, etc. Only one orientracker is needed per object that requires 3D position and orientation tracking, which suits most game objects. Each orientracker actually consists of three separate trackers, so that orientation can be determined. Since the human body has so many components moving relative to each other, it is easier to track each joint individually with a plain tracker (not an orientracker). A processing computer is needed to convert each tracker's 3D position in space into the virtual components that control the virtual representations. This system essentially converts every tracked object in the real world into exact relative component coordinates.
Since the game does not need a physics engine, controllers, animation scripts, etc., it will be fairly easy to control. The reason it does not need these things is that they are handled implicitly by the tracked objects.
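The parent/child tracker hierarchy above can be sketched as a small scene graph. This is an illustrative sketch under assumptions: the class name `TrackedObject`, the single-axis `yaw` orientation (a full orientracker would give full 3D orientation), and the example weapon numbers are all invented for the example. The point it demonstrates is the one stated in the text: the sliding part needs only one scalar, because its 3D world position follows from its parent's tracked pose.

```python
# Sketch of the tracker hierarchy: each node's position is relative to
# its parent (or to the absolute reference frame when parent is None).
# Orientation is simplified to a yaw angle for brevity; all names and
# values are illustrative assumptions.

import math

class TrackedObject:
    def __init__(self, position, yaw=0.0, parent=None, slide_axis=None):
        self.position = position      # (x, y, z) offset in the parent frame
        self.yaw = yaw                # orientation about the vertical axis, radians
        self.parent = parent
        self.slide_axis = slide_axis  # unit vector for a 1-D sliding child
        self.slide = 0.0              # the single scalar a 1-D tracker reports

    def world_position(self):
        """Compose this node's local offset (plus any 1-D slide) with
        the parent's tracked pose to recover the absolute position."""
        x, y, z = self.position
        if self.slide_axis is not None:
            ax, ay, az = self.slide_axis
            x += self.slide * ax
            y += self.slide * ay
            z += self.slide * az
        if self.parent is None:
            return (x, y, z)
        px, py, pz = self.parent.world_position()
        c, s = math.cos(self.parent.yaw), math.sin(self.parent.yaw)
        # rotate the local offset by the parent's yaw, then translate
        return (px + c * x - s * z, py + y, pz + s * x + c * z)

# A weapon tracked by an orientracker, with a bolt that slides along
# the weapon's x axis and is tracked by a single 1-D scalar:
weapon = TrackedObject((2.0, 1.0, 0.0), yaw=0.0)
bolt = TrackedObject((0.1, 0.0, 0.0), parent=weapon, slide_axis=(1.0, 0.0, 0.0))
bolt.slide = 0.05
```

The same composition applies recursively, so a dresser drawer, a weapon bolt, or a player's joint chain all resolve to absolute coordinates from mostly low-dimensional measurements.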

Technology (the game)

The actual game can be anything, from a virtual walkthrough of a futuristic city or countryside to an apocalyptic zombie game. Since we have information on everything's position, as well as the absolute positions of non-moving objects, we can program a tangibility map: one that paints a virtual representation onto each real object when the game is rendered. The real key in tangimapping (as we'll call it throughout) is combining the HMD technology with the tracking, so that the replaced sensory input appears in exactly the same position as its real counterpart. With this, objects can be interacted with realistically, since it will not seem different from real life. When a player reaches for an object, everything output to the screens is in exactly the right position at that moment, so the interaction feels natural. The game, for example the zombie game, can include all sorts of things that are not possible with other virtual reality systems. Consider some examples. You and a friend are literally walking through a destroyed city, and you lean on a skyscraper. You can feel the texture of the building, as well as hold your weapon. Since tangimapping enhances the game, the skyscraper is actually a short building (maybe 10 ft tall), but it appears massive. There is a sunset, but in reality you are inside a large building. You step on glass that you shot out of a tall building's window, and you can hear the crunch as you step on it. You can watch television, but in reality it is just a box. There can be holograms, gunfire, virtual walls, warped sensations and colors, etc. The lighting system is another interesting feature: since volumetric lighting and shadows are feasible at today's processing speeds, your shadows will add to the realism. You can shine a flashlight on a zombie, and your finger shadows will fall realistically across it.
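The tangibility map described above can be pictured as a lookup from each real object to its virtual stand-in, optionally with an enhancement such as a scale factor (the 10 ft prop drawn as a skyscraper). This is a hedged sketch: the structure, all entry names, and the scale values are assumptions made for illustration, not anything specified in the original text.

```python
# Sketch of a tangibility map: each tracked (or fixed) real object maps
# to a virtual representation plus an optional visual enhancement.
# Every entry and value here is an illustrative assumption.

TANGIBILITY_MAP = {
    "building_01": {"asset": "skyscraper_mesh", "scale": 20.0},  # ~10 ft prop drawn as a tower
    "box_07":      {"asset": "television",      "scale": 1.0},   # plain box shown as a working TV
    "player_hand": {"asset": "gloved_hand",     "scale": 1.0},
}

def render_entry(tracker_id, real_position):
    """Look up the virtual stand-in for a real object and place it at
    the object's tracked position, so sight and touch agree."""
    entry = TANGIBILITY_MAP[tracker_id]
    return {
        "asset": entry["asset"],
        "position": real_position,  # identical to the real-world position
        "scale": entry["scale"],
    }

placed = render_entry("building_01", (4.0, 0.0, 2.0))
```

Because the rendered position is always the tracked real position, any visual enhancement (scale, lighting, skybox) stays consistent with what the player's hands actually feel.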
