US20020036617A1 - Novel man machine interfaces and applications - Google Patents

Novel man machine interfaces and applications

Info

Publication number
US20020036617A1
Authority
US
United States
Prior art keywords
computer
camera
target
cameras
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/138,339
Inventor
Timothy R. Pryor
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed litigation Critical https://patents.darts-ip.com/?family=22481590&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=US20020036617(A1) "Global patent litigation dataset” by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Individual filed Critical Individual
Priority to US09/138,339 priority Critical patent/US20020036617A1/en
Publication of US20020036617A1 publication Critical patent/US20020036617A1/en
Priority to US11/186,898 priority patent/US7843429B2/en
Priority to US12/700,055 priority patent/US20100134612A1/en
Priority to US12/941,304 priority patent/US8068095B2/en
Priority to US13/267,044 priority patent/US8614668B2/en
Priority to US13/714,693 priority patent/US8760398B2/en
Priority to US13/714,727 priority patent/US8847887B2/en
Priority to US13/850,561 priority patent/US8736548B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304Detection arrangements using opto-electronic means
    • G06F3/0325Detection arrangements using opto-electronic means using a plurality of light emitters or reflectors or a plurality of detectors forming a reference frame from which to derive the orientation of the object, e.g. by triangulation or on the basis of reference deformation in the picked up image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1012Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals involving biosensors worn by the player, e.g. for measuring heart beat, limb activity
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1087Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/033Indexing scheme relating to G06F3/033
    • G06F2203/0331Finger worn pointing device

Definitions

  • the invention relates to simple input devices for computers, well suited for use with 3-D graphically intensive activities, and operating by optically sensing object or human positions and/or orientations.
  • the invention in many preferred embodiments, uses real time stereo photogrammetry using single or multiple TV cameras whose output is analyzed and used as input to a personal computer.
  • Iura and Sigel disclose means for using a video camera to look at an operator's body or finger and input control information to a computer. Their disclosure is generally limited to two-dimensional inputs in an xy plane, such as would be traveled by a mouse used conventionally.
  • Dementhion discloses the use of objects equipped with 4 LEDs, detected with a single video camera, to provide a 6-degree-of-freedom solution of object position and orientation. He downplays the use of retroreflector targets for this task.
  • Cipolla et al. discuss processing and recognition of movement-sequence gesture inputs detected with a single video camera, whereby objects or parts of humans equipped with four reflective targets or LEDs are moved through space, and a sequence of images of the objects is taken and processed.
  • the targets can be colored to aid discrimination
  • Pryor, one of the inventors, has in several previous applications described single and dual (stereo) camera systems utilizing natural features of objects or special targets, including retroreflectors, for determination of position and orientation of objects in real time suitable for computer input, in up to 6 degrees of freedom.
  • Pinckney has described a single camera method for using and detecting 4 reflective targets to determine position and orientation of an object in 6 degrees of freedom.
  • Another reference relating to use of two or more cameras, is Development of Stereo Vision for Industrial Inspection, Dr. S. F. El-Hakim, Proceedings of the Instrument Society of America (ISA) Symposium, Calgary Alta, Apr. 3-5, 1989. This paper too has several useful references to the photogrammetry art.
  • many embodiments may operate with natural features, colored targets, self-illuminated targets such as LEDs, or with retroreflective targets. Generally the latter two give the best results from the point of view of speed and reliability of detection, which is of major importance to widespread dissemination of the technology.
  • the disclosed invention is optically based, and generally uses unobtrusive specialized datums on, or incorporated within, an object whose 3D position and/or orientation is to be input to a computer.
  • datums are viewed with a single TV camera, or two TV cameras forming a stereo pair.
  • a preferred location for the camera(s) is proximate the computer display, looking outward therefrom, or to the top or side of the human work or play space.
  • Retroreflective glass bead tape, or beading such as that composed of Scotchlite 7615 by 3M Co., provides a point, line, or other desirably shaped datum which can be easily attached to any object desired, and which has high brightness and contrast to surroundings such as parts of a human, clothes, a room, etc., when illuminated with incident light along the optical axis of the viewing optics, such as that of a TV camera.
  • This allows cameras to be used in normal environments, and having fast integration times capable of capturing common motions desired, and allows datums to be distinguished easily which greatly reduces computer processing time and cost.
  • Retroreflective or other datums are often distinguished by color or shape as well as brightness. Other suitable target datums can be distinguished on color or shape or pattern alone, but do not have the brightness advantage offered by the retroreflector.
  • Suitable retroreflectors can alternatively be glass, plastic, or retroreflective glass bead paints, and can be forms of retroreflectors other than beads, such as corner cubes. But the beaded type is most useful. Shapes of datums found to be useful have been, for example, dots, rings, lines, edge outlines, triangles, and combinations of the foregoing.
  • Full 3D (up to 6 degrees of freedom, e.g., x, y, z, roll, pitch, yaw) real-time dynamic input is provided, using artifacts, aliases, portions of the human body, or combinations thereof.
  • the invention has a unique ability to combine what amounts to 3D icons (physical artifacts) with static or dynamic gestures or movement sequences. This opens up, among other things, a whole new way for people, particularly children, beginners and those with poor motor or other skills to interact with the computer. By manipulating a set of simple tools and objects that have targets appropriately attached, a novice computer user can control complex 2D and 3D computer programs with the expertise of a child playing with toys!
  • the invention also acts as an important teaching aide, especially for small children and the disabled, who have undeveloped motor skills. Such persons can, with the invention, become computer literate far faster than those using conventional input devices such as a mouse.
  • the ability of the invention to use any desired portion of a human body, or an object in his command provides a massive capability for control, which can be changed at will.
  • the invention allows one to avoid carpal tunnel syndrome and other effects of using keyboards and mice. One need only move through the air, so to speak, or with ergonomically advantageous artifacts.
  • the system can be calibrated for each individual to magnify even the smallest motion, to compensate for handicaps or enhance user comfort or other benefits (e.g., trying to work in a cramped space on an airplane). If desired, unwanted motions can be filtered or removed using the invention (in this case a higher number of camera images than would normally be necessary is typically taken, and effects in some frames averaged, filtered, or removed altogether).
  • the invention also provides for high resolution of object position and orientation at high speed and at very low or nearly insignificant cost. And it provides for smooth input functions without the jerkiness of mechanical devices such as a sticking mouse of the conventional variety.
  • the invention can be used to aid learning in very young children and infants by relating gestures of hands and other bodily portions or objects (such as rattles or toys held by the child), to music and/or visual experiences via computer generated graphics or real imagery called from a memory such as DVD disks or the like.
  • the invention is particularly valuable for expanding the value of life-size, near life-size, or at least large-screen (e.g., greater than 42 inches diagonal) TV displays. Since the projection can now be of this size at affordable cost, the invention allows an equally affordable means of relating in a lifelike way to the objects on the screen: to play with them, to modify them, and otherwise interrelate using one's natural actions and the naturally appearing screen size, which can also be in 3D using stereo display techniques of whatever desired type.
  • FIG. 1 illustrates basic sensing useful in practicing the invention
  • FIG. 1 a illustrates a basic two dimensional embodiment of the invention utilizing one or more retroreflective datums on an object, further including means to share function with normal imaging for internet teleconferencing or other activities.
  • FIG. 1 b illustrates a 3 Dimensional embodiment using single camera stereo with 3 or more datums on an object or wrist of the user.
  • FIG. 1 c illustrates another version of the embodiment of FIG. 1 a, in which a two-camera (“binocular”) stereo pair is used to image an artificial target on the end of a pencil. Additionally illustrated is a two-camera stereo arrangement with a line target plus a natural hole feature on an object.
  • FIG. 1 d illustrates a control flow chart of the invention
  • FIG. 1 e is a flow chart of a color target processing embodiment
  • FIG. 2 illustrates Computer aided design system (CAD) related embodiments
  • FIG. 2 a illustrates a first CAD embodiment according to the invention, and a version for 3-D digitizing and other purposes
  • FIG. 2 b describes another Computer Design embodiment with tactile feedback for “whittling ” and other purposes
  • FIG. 3 illustrates additional embodiments working virtual objects, and additional alias objects according to the invention
  • FIG. 4 illustrates a car driving game embodiment of the invention, which in addition illustrates the use of target-based artifacts and simplified head tracking with viewpoint rotation.
  • the car dash is, for example, a plastic model purchased or constructed to simulate a real car dash, or can even be a make-believe dash (i.e., one in which the dash is made from, for example, a board, and the steering wheel from a dish), and the car is simulated in its actions via computer imagery and sounds
  • FIG. 5 illustrates a one or two person airplane game according to the invention, to further include inputs for triggering and scene change via movement sequences or gestures of a player. Also illustrated in FIG. 5 c is a hand puppet game embodiment of the invention played if desired over remote means such as the Internet
  • FIG. 6 illustrates other movements, such as gripping or touch, which can be sensed by the invention and can be useful as input to a computer system for the purpose of signaling that a certain action is occurring
  • FIG. 7 illustrates further detail as to the computer architecture of movement sequences and gestures, and their use in computer instruction via video inputs. Also illustrated are means to determine position and orientation parameters with minimum information at any point in time.
  • FIG. 8 illustrates embodiments, some of which are a simulation analog of the design embodiments above, used for Medical or dental teaching and other applications.
  • FIG. 8 a illustrates a targeted scalpel used by a medical student for simulated surgery, further including a compressible member for calculating out of sight tip locations
  • FIG. 8 c illustrates targeted instruments and targeted body model
  • FIG. 8 d illustrates a body model on a flexible support
  • FIG. 8 e illustrates a dentist doing real work with a targeted drill
  • FIG. 8 f shows how a surgeon can control the manipulation of a laparoscopic tool or a robot tool through the complex 3D environment of a body with the help of a targeted model of a body as an assembly of body parts.
  • FIG. 8 g is another embodiment
  • FIG. 9 illustrates a means for aiding the movement of a person's hands while using the invention in multiple-degree-of-freedom movement
  • FIG. 10 illustrates a natural manner of computer interaction for aiding the movement of a person's hands while using the invention in multiple-degree-of-freedom movement, with one's arms resting on an armrest of a chair, car, or the like
  • FIG. 11 illustrates coexisting optical sensors for other variable functions in addition to image data of scene or targets.
  • a particular illustration of a Level vial in a camera field of view illustrates as well the establishment of a coordinate system reference for the overall 3-6 degree of freedom coordinate system of the camera(s).
  • FIG. 12 illustrates a touch screen employing target inputs from fingers or other objects in contact or virtual contact with the screen, either of the conventional CRT variety, an LCD screen, or a projection screen—including aerial projection in space. Calibration or other functions via targets projected on the screen is also disclosed.
  • FIG. 13 illustrates clothes design using preferred embodiments incorporating finger touch, laser pointing and targeted material.
  • FIG. 14 illustrates additional applications of alias objects such as those of FIG. 3, for purposes of planning visualization, building toys, and inputs in general.
  • FIG. 15 illustrates a sword play and pistol video game play of the invention using life size projection screens, with side mounted stereo camera and head tracking audio system (and/or tv camera/light source tracker)
  • FIG. 16 illustrates an embodiment of the invention having a mouse and/or keyboard of the conventional variety combined with targets of the invention on the user, to give an enhanced capability even to a conventional word processing, spreadsheet, or other program.
  • a unique portable computer for use on airplanes and elsewhere is disclosed
  • FIG. 17 illustrates an optically sensed keyboard embodiment of the invention, in this case for a piano
  • FIG. 18 illustrates gesture based musical instruments such as violins and virtual object musical instruments according to the invention, having synthesized tones and, if desired, display sequences.
  • FIG. 19 illustrates a method for entering data into a CAD system used to sculpt a car body surface.
  • FIG. 20 illustrates an embodiment of the invention used for patient or baby monitoring
  • FIG. 21 illustrates a simple embodiment of the invention for toddlers and preschool age children, which is also useful to aid learning in very young children and infants by relating gestures of hands and other bodily portions or objects such as rattles held by the child, to music and/or visual experiences.
  • FIG. 22 illustrates the use of a PSD (position-sensitive photodiode) based image sensor rather than, or in conjunction with, a TV camera.
  • Two versions are shown: a single-point device, with retro-reflective illumination or with a battery-powered LED source, and a multi-point device with LED sources. A combination of this sensor and a TV camera is also described, as is an alternative using fiber-optic sources.
  • FIG. 23 illustrates inputs to instrumentation and control systems, for example those typically encountered in car dashboards to provide added functionality and to provide an aide to drivers, including the handicapped
  • FIG. 24 illustrates means for simple “do it yourself” object creation using the invention
  • FIG. 25 illustrates a game experience with an object represented on a deformable screen.
  • FIG. 26 illustrates the use of motion blur to determine the presence of movement or calculate movement vectors
  • FIG. 27 illustrates retro-reflective jewelry and makeup according to the invention
  • FIG. 1 a illustrates a simple single camera based embodiment of the invention.
  • a user 5 desires to point at an object 6 represented electronically on the screen 7 and cause the pointing action to register, in the software contained in computer 8, with respect to that object (a virtual object), in order to cause a signal to be generated to the display 7 to activate the object or allow it to be moved (e.g., with a subsequent finger motion or otherwise). He accomplishes this using a single TV camera 10, located typically on top of the screen as shown or alternatively to the side (such as 11), to determine the position of his fingertip 12 in space and/or the pointing direction of his finger 13.
  • retro-reflective material on the finger is disclosed herein as either temporarily attached to the finger, as in jewelry, painted on the finger using a retro-reflective coating “nail polish,” or adhered to the finger, such as with adhesive tape having a retro-reflective coating.
  • retro-reflective coatings are typically those of Scotchlite 7615 and its equivalent that have high specific reflectivity, contrasting well to their surroundings to allow easy identification. The brightness of the reflection allows dynamic target acquisition and tracking at lowest cost.
  • the camera system employed for the purposes of low cost, desirable for home use, is typically that used for Internet video conferencing and the like today. These are CCD, and more recently CMOS, cameras having low cost ($25-100) yet relatively high pixel counts and densities. It is considered that within a few years these will be standard on all computers (for all intents and purposes, “free” to the applications here proposed) and interfaced via “FireWire” (IEEE 1394) or USB (Universal Serial Bus).
  • retroreflective and/or highly distinctive targets allow reliable acquisition of the target in a general scene, and do not restrict the device to pointing in a desktop application under controlled lighting as shown in Sigel or others.
  • Active (self-luminous) targets such as LEDs also allow such acquisition, but are more costly, cumbersome, and obtrusive, and are generally less preferable.
  • the first, illustrated in FIGS. 1 a and b is to utilize a single camera, but multiple discrete features or other targets on the object which can provide a multidegree of freedom solution.
  • the target spacing on the object is known a priori and entered into the computer manually or automatically from software containing data about the object, or can be determined through a teaching step.
  • the second is a dual camera solution shown in FIGS. 1 c and d that does not require a priori knowledge of targets and in fact can find the 3D location of one target by itself, useful for determining finger positions for example.
  • At least three point targets are required, although line targets, and combinations of lines and points, can also be used.
  • FIG. 1 b illustrates a 3-D (3 Dimensional) sensing embodiment using single camera stereo with 3 or more datums on a sensed object, or in another example, the wrist of the user.
  • object 30 has at least 3 visible datums 32, 33, and 34, which are viewed by TV camera 40, whose signal is processed by computer 41, which also controls projection display 42.
  • TV camera 40 also views 3 other datums 45, 46, and 47 on the wrist 48 of the user's left hand, in order to determine its orientation or rough direction of pointing of the left hand 51, or its position relative to object 30, or any other data (e.g., its relation to the screen position or other location related to the mounting position of the TV camera, or to the user's head if viewed, or whatever).
  • the position and orientation of the object and hand can be determined from the 3 point positions in the camera image using known photogrammetric equations (see Pinckney, reference U.S. Pat. No. 4,219,847 and other references in papers referenced).
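For illustration only, the single-camera, multi-datum pose recovery described above can be sketched with a modern computer-vision library. The patent relies on photogrammetric equations such as Pinckney's rather than this routine, so OpenCV's solvePnP, the datum layout, and all numeric values below are assumptions, not the patent's own method.

```python
# Minimal sketch (not the patent's algorithm): recover a 6-DOF pose of an object
# from several known retroreflective datums seen in one camera image.
# Datum layout, pixel positions, and camera intrinsics are assumed values.
import numpy as np
import cv2

# 3D datum positions in the object's own coordinate frame (mm), assumed coplanar here.
object_datums = np.array([[0.0, 0.0, 0.0],
                          [60.0, 0.0, 0.0],
                          [0.0, 40.0, 0.0],
                          [60.0, 40.0, 0.0]], dtype=np.float64)

# Centroids of the bright datum images found in the camera picture (pixels).
image_points = np.array([[412.3, 298.7],
                         [505.1, 301.2],
                         [409.8, 221.4],
                         [501.6, 219.0]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],      # assumed focal length / principal point (pixels)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                       # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_datums, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)               # rotation taking object coordinates to camera coordinates
print("object origin in camera frame (mm):", tvec.ravel())
```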
  • a colored triangular target for example can be used in which the intersections of lines fitted to its sides define the target datums, as discussed below
  • the finger can be detected just from its general gray level image, and can be easily identified in relation to the targeted wrist location (especially if the user, as shown, has clenched his other fingers such that the finger 52 is the only one extended on that hand).
  • the computer can process the gray level image using known techniques, for example blob and other algorithms packaged with the Matrox brand Genesis image processing board for the PC, and determine the pointing direction of the finger using the knowledge of the wrist gained from the datums. This allows the left hand finger 50 to alternatively point at a point (or touch a point) to be determined on the object 30 held in the right hand as well.
  • FIG. 1 c illustrates another version of the embodiments of FIGS. 1 a and b, in which two-camera “binocular” stereo cameras 60 and 61, processed by computer 64, are used to image an artificial target (in this case a triangle, see also FIG. 2) 65 on the end of pencil 66, and optionally, to improve pointing resolution, target 67 on the tip end of the pencil, typically a known small distance from the tip. (The user and his hand holding the pencil are not shown for clarity.) This imaging allows one to track the pencil tip position in order to determine where on the paper (or TV screen, in the case of a touch screen) the pencil is contacting. (See also FIG. 2 and FIG. 12.)
  • the computer can also acquire the stereo image of the paper and the targets in its four corners, 71 - 74 .
  • Solution of the photogrammetric equation allows the position of the paper in space relative to the cameras to be determined, and thence the position of the pencil, and particularly its tip, relative to the paper, which is passed to display means 75 or another computer program. Even without the target on the end, the pointing direction can be determined from target 65, and, knowing the length of the pencil, the tip position can be calculated.
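As a small worked example of the tip calculation just mentioned (tip position from the pose of target 65 plus the known pencil length), the following is a hedged sketch; the target position, direction vector, and pencil length are made-up values, not figures from the patent.

```python
# Sketch: estimate the pencil tip from the target's position, the pointing direction
# obtained from the pose solution, and the known target-to-tip distance. Values assumed.
import numpy as np

target_position = np.array([120.0, 85.0, 410.0])   # location of target 65 in the camera frame (mm)
pointing_dir = np.array([0.12, -0.30, 0.95])       # direction along the pencil axis, from the pose
pointing_dir = pointing_dir / np.linalg.norm(pointing_dir)
pencil_length = 155.0                               # known distance from target 65 to the tip (mm)

tip_position = target_position + pencil_length * pointing_dir
print("estimated pencil tip (mm):", tip_position)
```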
  • a line target 76 on the pencil, or a plurality of line targets spaced circumferentially, can also be of use in defining the pencil pointing direction from the stereo image pair.
  • a working volume of the measurement system is shown in dotted lines 79 —that is the region on and above the desk top in this case where the sensor system can operate effectively. Typically this is more than satisfactory for the work at hand.
  • the dual (Stereo pair) camera system of FIG. 1 has been extensively tested and can provide highly accurate position and orientation information in up to 6 degrees of freedom.
  • This provides 30 Hz updates of all 6 axes (x y z roll pitch and yaw) data over a working volume of 0.5 meter ⁇ 0.5 meter in x and y (the desktop, where cameras are directly overhead pointing down at the desk) and 0.35 meters in z above the desk, all to an accuracy of 0.1 mm or better, when used with clearly visible round retroreflective (scotchlite 7615 based) datums approx. 5-15 mm in diameter on an object for example. This is accurate enough for precision tasks such as designing objects in 3D cad systems, a major goal of the invention
  • the cameras in this example are mounted overhead. If mounted to the side or front, or at an angle such as 45 degrees to the desktop, the z axis becomes the direction outward from the cameras.
  • FIG. 1 c additionally illustrates a two-camera stereo arrangement, used in this case to determine the position and orientation of an object having a line target, and a datum on a portion of the user.
  • cameras 60 and 61 are positioned to view a retro-reflective line target 80 in this case running part of the length of a toy sword blade 81 .
  • the line target in this case is made as part of the plastic sword, and is formed of molded-in corner cube reflectors similar to those in a taillight reflector on a car. It may also be made one unique color relative to the rest of the sword, and the combination of the two gives an unmistakable indication.
  • a line type of target can be cylindrical in shape if wrapped around a cylindrical object, which can be viewed then from multiple angles.
  • This data is calculated in computer 64 , and used to modify a display on screen 75 as desired, and further described in FIG. 15.
  • a Matrox Genesis frame processor card on a 300 MHz IBM PC was used to read both cameras and process the information at the camera frame rate of 30 Hz.
  • line targets are very useful on sleeves of clothing, seams of gloves for pointing, rims of hats, and other decorative and practical purposes, for example outlining the edges of objects or portions thereof, such as holes and openings.
  • the cameras 60 and 61 have magnifications and fields of view which are equal, and overlap in the volume of measurement desired.
  • the axes of the cameras can be parallel, but for operation at ranges of a few meters or less they are often inclined at an acute angle A with respect to each other, so as to increase the overlap of their fields of view, particularly if larger baseline distances d are used for increased accuracy (albeit with less z-range capability).
  • A can be 30-45 degrees, with a baseline of 0.5 to 1 meter.
  • the angle A and the baseline would be less, to allow a larger range of action.
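The trade-off between baseline and convergence angle described above can be made concrete with simple geometry; this is an illustrative calculation with assumed numbers, not a prescription from the patent.

```python
# Sketch: for two cameras toed in toward each other by a total included angle A,
# separated by baseline d, their optical axes cross at roughly d / (2 * tan(A/2)).
import math

d = 0.75                  # baseline between the cameras, meters (assumed)
A = math.radians(35.0)    # total convergence angle between the optical axes (assumed)

crossover_range = d / (2.0 * math.tan(A / 2.0))
print(f"optical axes cross about {crossover_range:.2f} m from the baseline")
```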
  • the datums on an object can be known a priori relative to other points on the object, and to other datums, by selling or otherwise providing the object designed with such knowledge to a user and including with it a CD-ROM disc or other computer-interfaceable storage medium having this data.
  • the user or someone can teach the computer system this information. This is particularly useful when the datums are applied by the user on arbitrary objects.
  • FIG. 1 d illustrates steps used in the invention relating to detection of a single point to make a command: in this case, the position (or change of position, i.e., movement) of a fingertip as in FIG. 12, having retroreflective target 1202 attached, detected by a stereo pair of TV cameras 1210 using a detection algorithm which in its simplest case is based on thresholding the image to see only the bright target indication from the finger (and optionally any object associated therewith, such as a screen to be touched, for example).
  • a shape detection step in which a search for a shape is made, such as a circle, ring, triangle, etc.
  • Each step may process only those passing the previous step, or each may be performed independently, and the results compared later.
  • the orders of these steps can be changed but each adds to further identify the valid indication of the finger target.
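A sketch of the threshold-and-shape detection chain just described, using OpenCV as an assumed toolkit; the threshold level, area limits, and circularity test are illustrative choices rather than values given in the patent.

```python
# Sketch of the detection chain above: threshold so only the bright retroreflective
# return survives, gate candidate blobs by size and shape, then report the centroid
# (the characteristic mentioned for step G of FIG. 1d). Parameter values are assumed.
import cv2
import numpy as np

def find_finger_target(gray_image, min_area=20, max_area=2000, min_circularity=0.6):
    _, binary = cv2.threshold(gray_image, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        area = cv2.contourArea(c)
        if not (min_area <= area <= max_area):
            continue                       # size gate: reject specks and large glare
        perimeter = cv2.arcLength(c, True)
        circularity = 4.0 * np.pi * area / (perimeter * perimeter + 1e-9)
        if circularity < min_circularity:
            continue                       # shape gate: keep only roughly round blobs
        M = cv2.moments(c)
        return (M["m10"] / M["m00"], M["m01"] / M["m00"])   # centroid of the target dot
    return None                            # no valid target indication in this frame
```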
  • the position of the targeted finger is determined by comparing the difference in location of the finger target in the two camera images of the stereo pair. There is no matching problem in this case, as a single target is used, which appears as only one found point in each image.
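Because the single target appears as only one found point in each image, its 3D position follows directly from the two pixel locations. Below is a minimal sketch using OpenCV triangulation; the projection matrices stand in for a real stereo calibration and all coordinates are placeholders.

```python
# Sketch: triangulate the one bright finger target from a calibrated stereo pair.
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],       # assumed shared intrinsics for both cameras
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
baseline = np.array([[-500.0], [0.0], [0.0]])           # right camera 500 mm to the side (assumed)

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])       # left-camera projection matrix
P2 = K @ np.hstack([np.eye(3), baseline])               # right camera, parallel axes assumed

left_pt = np.array([[412.3], [298.7]])                  # target centroid in the left image
right_pt = np.array([[371.9], [297.5]])                 # the same target in the right image

hom = cv2.triangulatePoints(P1, P2, left_pt, right_pt)  # 4x1 homogeneous point
xyz = (hom[:3] / hom[3]).ravel()
print("target position in the left-camera frame (mm):", xyz)
```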
  • the computer 8 can be used to analyze incoming TV-image-based signals and determine which points are moving in the image. This is helpful to eliminate background data which is stationary, since oftentimes only moving items such as a hand or object are of interest. In addition, the direction of movement is in many cases the answer desired, or even the fact that a movement occurred at all.
  • Motion pre processing is useful when target contrast is not very high, as it allows one to get rid of extraneous regions and concentrate all target identification and measurement processing on the real target items.
  • the range can be determined, and if the object is then tracked from that point onward, even when it is not moving, the range measurement gives a good way to lock onto the object using more than just 2 dimensions.
  • Image subtraction or other computer processing operations can also be useful in another sense.
  • One idea is simply to take a picture of a room or other work space, and then bring in the targeted object; the stored image can then be subtracted from subsequent frames. The net result is that any bright features in the space which are not of concern, such as bright doorknobs, glasses, etc., are eliminated from consideration.
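A brief sketch of the image-subtraction idea above, assuming OpenCV; the file names and threshold are placeholders for whatever reference and live frames the system actually uses.

```python
# Sketch: suppress stationary bright features (doorknobs, glasses, etc.) by differencing
# each new frame against a reference image of the empty work space.
import cv2

reference = cv2.imread("empty_room.png", cv2.IMREAD_GRAYSCALE)   # taken before the session
frame = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)    # live frame with the targeted object

diff = cv2.absdiff(frame, reference)             # stationary features largely cancel out
_, candidates = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
# 'candidates' now contains only regions that changed, i.e., the newly introduced
# targeted object (and any motion), ready for the target-identification steps above.
```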
  • A flow chart is shown in FIG. 1 d illustrating the steps, for example:
  • G: Determine centroid or other characteristic of target point (in this case a retro dot on finger)
  • Latency: frequency response, the time to get a position or orientation answer
  • the following is a multi-degree of freedom image processing description of a triangular shaped color target (disclosed itself in several embodiments of the invention herein) which can be found optically using one or more cameras to obtain the 3 dimensional location and orientation of the target using a computer based method described below. It uses color processing to advantage, as well as a large number of pixels for highest resolution, and is best for targets that are defined by a large number of pixels in the image plane, typically because the target is large, or the cameras are close to the target, or the camera field is composed of a very large number of pixels.
  • the method is simple but unique in that it can be applied 1) in a variety of degrees to increase the accuracy (albeit at the expense of speed), 2) with 1 or more cameras (more cameras increase accuracy), and 3) it can utilize the combination of the target's colors and triangles (1 or more) to identify the tool or object. It utilizes the edges of the triangles to obtain accurate subpixel accuracy. A triangle edge can even have a gentle curve and the method will still function well.
  • the method is based on accurately finding the 3 vertices (F0, G0, F1, G1, F2, G2) of each triangle in the camera field by accurately defining the edges and then computing the intersection of these edge curves, rather than finding 3 or 4 points from spot centroids.
  • the preferred implementation uses 1 or more color cameras to capture a target composed of a brightly colored right triangle on a rectangle of different brightly colored background material.
  • the background color and the triangle color must be two colors that are easily distinguished from the rest of the image. For purposes of exposition we will describe the background color as a bright orange and the triangle as aqua.
  • the vertices of the triangle can be found very accurately. If there is more than one triangle on a target, a weighted average of location and orientation information can be used to increase accuracy.
  • the method starts searching for a pixel with the color of the background or of the triangle, beginning with the pixel location of the center of the triangle from the last frame. Once a pixel with the triangle “aqua” color is found, the program marches in four opposite directions until each march detects a color change indicative of an edge dividing the triangle and the “orange” background. Next, the method extends the edges to define three edge lines of the triangle with a least squares method. The intersection points of the resulting three lines are found, and serve as rough estimates of the triangle vertices. These can serve as input for applications that don't require high accuracy.
  • these provisional lines are then used as a starting point for the subpixel refinement process.
  • Each of these 3 lines is checked to see if it is mainly horizontal. If a line is mainly horizontal, then a new line will be determined by fitting a best-fit curve through the pixels in each column that straddle the provisional line. If a line is mainly vertical, then the same process proceeds on rows of pixels.
  • V = WR*CR + WG*CG + WB*CB
  • A flow chart is shown in FIG. 2 a.
  • This value V is compared with the ideal value U which is equal to the percentage of orangeness calculated assuming the angle of the provisional line is the same as that of the ideal line. For example, a pixel which is crossed by the line in the exact middle would have a U of 0.5, since it is 50% aqua and 50% orange. A fit of U-V in the column (or row) in the vicinity of the crossing of the provisional line gives a new estimate of the location of the true edge crossing. Finally, the set of these crossing points can be fit with a line or gentle curve for each of the three edges and the 3 vertices can be computed from the intersections of these lines or curves.
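A hedged sketch of the final vertex computation described above: fit a line to the sub-pixel edge crossings found along each triangle side, then intersect pairs of lines to obtain the vertices (F0, G0), (F1, G1), (F2, G2). The edge-sample coordinates are synthetic, and the particular least-squares fit shown is one reasonable implementation, not necessarily the patent's exact procedure.

```python
# Sketch: line fit through edge crossings, then vertex = intersection of two edge lines.
import numpy as np

def fit_line(points):
    """Least-squares line a*x + b*y = c through an (N, 2) array of edge crossings."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                        # direction of least variance = line normal
    return normal[0], normal[1], normal @ centroid

def intersect(l1, l2):
    a = np.array([[l1[0], l1[1]], [l2[0], l2[1]]])
    b = np.array([l1[2], l2[2]])
    return np.linalg.solve(a, b)           # vertex where the two edge lines cross

edge1 = np.array([[10.2, 5.1], [20.1, 10.0], [30.3, 15.2]])   # sub-pixel crossings, side 1
edge2 = np.array([[10.0, 5.0], [9.8, 20.1], [10.1, 35.0]])    # sub-pixel crossings, side 2
vertex = intersect(fit_line(edge1), fit_line(edge2))
print("estimated vertex (F, G):", vertex)
```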
  • Image coordinates are related to object coordinates through the lens focal length and z, the perpendicular distance from the lens to a location on the target.
  • a triangle on the target is initially defined as lying in a plane parallel to the lens plane.
  • the preferred configuration has one right triangle whose right angle is defined at x 0 , y 0 , z 0 with one edge (of length A) extending along the direction of the F axis of the camera and with the other edge (of length B) extending along the direction of the G axis of the camera.
  • the actual target orientation is related to this orientation with the use of the three Euler angles.
  • the 6 derived data values of the 3 vertices can be used to define 6 values of location and orientation of the target.
  • the location and orientation of a point of interest on any tool or object rigidly attached to this target can be easily computed from calibration data and ordinary translation and rotation transformations.
  • Refinements to handle lens distortions can be handled by forming a correction function with calibration data that modifies the locations of the F and G data.
  • The quantities s12, s13, s21, s22, s32 and r1 are given by trigonometric expressions in the three Euler angles, the edge lengths A and B, the image coordinates F0, F1, F2 and G1, the distance z0, and the focal length.
  • Another aspect is digitization of object shapes. There are times when one would like to take a plastic model or a real-world part as a starting point for a 3D design. Prior-art devices that capture 3D shapes are, however, expensive and cumbersome and cannot, like the invention, share their function as a replacement for the mouse or 2D graphic tablet.
  • the invention as here disclosed relates physical activities and physical objects directly to computer instructions.
  • a novice user can design a house with a collection of targeted model or “toy” doors, windows, walls etc.
  • By touching the appropriate toy component and then moving and rotating the user's hand she can place the component at the appropriate position.
  • the user can either get his or her visual cue by looking at the position of the toy on the desk or by watching the corresponding scaled view on the computer display.
  • Many other embodiments are also possible.
  • FIG. 2 a illustrates an embodiment wherein the invention is used to “work” on an object, as opposed to pointing or otherwise indicating commands or actions. It is a computer-aided design (CAD) embodiment according to the invention which illustrates several basic principles of optically aided computer inputs using single- or dual/multi-camera (stereo) photogrammetry. Illustrated are new forms of inputs to effect both the design and simulated assembly of objects.
  • keystrokes can be replaced, if desired, by voice commands, assuming suitable voice-recognition capability in the computer.
  • Also shown is a touching and indicating device 216 with action tip 217 and a multi-degree-of-freedom enabling target 215 that the user holds in her hand.
  • Single targets or multiple targets can be used with a camera system such as 206 so as to provide up to 6-axis information of pointing-device position and orientation vis-a-vis the camera reference frame, and, by matrix transform, relative to any other coordinate system such as that of a TV display 220.
  • a user can send an interrupt signal from an “interrupt member” (such as pressing a keyboard key) to capture a single target location and orientation or a stream of target locations (ended with another interrupt).
  • a computer program in the computer determines the location and orientation of the target.
  • the location and orientation of the “action tip” 217 of the pointing device can be computed with simple offset calculations from the location and orientation of the target or target set.
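The offset calculation mentioned above amounts to one rigid-body transform; here is a minimal sketch with assumed pose and offset values (the actual calibration of tip 217 relative to target 215 would come from the tool's database entry).

```python
# Sketch: tip location = rotate the calibrated tip offset by the target's orientation,
# then add the target's position. All numbers below are assumed.
import numpy as np

R = np.eye(3)                                # target orientation from the pose solution
t = np.array([150.0, -40.0, 520.0])          # target origin in the camera frame (mm)
tip_offset_in_target = np.array([0.0, 0.0, -95.0])   # calibrated offset of action tip 217 (mm)

tip_in_camera = R @ tip_offset_in_target + t
print("action tip 217 in camera frame (mm):", tip_in_camera)
```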
  • the set of tip 217 locations defines the 3D shape of the real world object 205 .
  • Different targeted tools with long or curved extensions to their action tips can be used to reach around the real world object while maintaining an attached target in the target volume so the cameras can record its location/orientation.
  • the user can send location and orientation information to operate a computer program that will deform or modify the shape of the computer model displayed. Note that the user can deform a computer model even if there is no real world object under the tip. The tip location and orientation can always be passed to the computer program that is deforming the computer model.
  • the same device can be used to replace graphic tablets, mice, or white boards, or to be used in conjunction with a display screen, turning into a form of touch screen (as previously, and further discussed herein).
  • Interrupt members can be activated (i.e. a button or keyboard key etc. can be pressed) like mouse buttons.
  • These together with the target ID can initiate a computer program to act like a pen or an eraser or a specific paintbrush or spray can with width or other properties.
  • the other target properties (z, or orientation angles) can be assigned to the computer program's pen, brush or eraser letting the user dynamically change these properties.
  • Target(s) can be attached to a user's hand or painted on her nails using retroreflective nail polish paint, for example, allowing the user to quickly move her hand from the keyboard and allow a camera or cameras and a computer like that of FIG. 1 to determine the position and orientation, in 2D or 3D, of a computer-generated object on the display, and to set the view direction or zoom, or input a set of computer parameters or computer instructions. This can all be done with the same device described in the above figures.
  • a major advantage is that this is done without having to grab a mouse or other device. Fingertips can be tracked in order to determine a relative movement such as a grasping motion of the fingers, further described in FIG. 6. Similarly, the relation of, say, one finger to the nail of the other hand can be seen.
  • Suitable indication can be the nail or natural image of the finger itself if suitable processing time and data processing power is available.
  • results today are expeditiously and economically best achieved by using easily identified, and preferably bright, indicia such as retroreflective items, brightly colored or patterned items, unusually shaped items, or a combination thereof.
  • the computer can both process the optical input and run the computer application software or a group of computers can process the optical data to obtain the location and orientation of the targets over time and pass that information to the application software in a separate computer.
  • the object 205 is shown being digitized with the simple pointer 216, though different tools could be used.
  • additional tools which could be used to identify the location and orientation of a 3D object are: a long stemmed pointer to work behind an object, pointers designed to reach into tight spaces, or around features, pointers to naturally slide over round surfaces, or planar corners.
  • the camera system can capture the location and orientation of the target as well as its ID (alternatively, one could enter the ID conventionally via a keyboard, voice, or whatever).
  • the ID is used to lookup in the associated database the location of the “work tip”.
  • the 3D coordinates can then be passed to the application software to later build the 3D data necessary to create a computer model of the object.
  • When working on the back of the object farthest from the cameras, the object may obscure the camera view of the target on the simple tool. Thus the user may switch to the long-stem tool or the curved-stem tool that are used to get around the blocking geometry of the object. Other pointers can be used to reach into long crevices.
  • An “activation member” can be any signal to the computer system that it should initiate a new operation, such as collecting one or more data points, storing the information, looking up information in the associated databases, etc.
  • Examples of the activation member are a button or foot pedal electronically linked to the computer, a computer keyboard key that is depressed, a trigger turning on a light or set of lights on a target, or sound or voice activation.
  • Another method of acquiring a 3D shape is to slide a targeted tool over the object acquiring a continuous stream of 3D coordinates that can be treated as a 3D curve. These curves can later be processed to define the best 3D model to fit these curves.
  • Each curve can be identified as either being an edge curve or a curve on the general body surface by hitting the previously defined keyboard key or other activation member.
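A sketch of treating the recorded stream of tip coordinates as a 3D curve, as described above; SciPy's spline routines are used here purely as an assumed convenience for smoothing and resampling, and the sampled path is synthetic.

```python
# Sketch: smooth and resample a sweep of 3D tip positions before surface fitting.
import numpy as np
from scipy.interpolate import splprep, splev

# Raw tip positions captured at the camera frame rate while sliding the targeted tool.
path = np.array([[0.0, 0.0, 10.0], [5.0, 1.0, 10.5], [10.0, 2.5, 11.2],
                 [15.0, 4.5, 12.0], [20.0, 7.0, 12.4], [25.0, 10.0, 12.5]])

tck, u = splprep(path.T, s=0.5)              # fit a smoothing spline to the sweep
dense_u = np.linspace(0.0, 1.0, 50)
curve = np.array(splev(dense_u, tck)).T      # resampled 3D curve, ready for model fitting
print(curve.shape)                            # (50, 3)
```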
  • This method is extremely powerful for capturing clay modeling as the artist is performing his art. In other words, each sweep of his fingers can be followed by recording the path of a target attached to his fingers.
  • the target ID is used to look up in the associated database the artist's finger width and the typical deformation that his fingers experience on a sweep. He can change targets as the artwork nears completion to compensate for a lighter touch with less deformation.
  • FIG. 2 b illustrates how targeted tools can be used in a CAD system or other computer program.
  • a targeted work tool can be a toy model of the real world tool 280 (a toy drill for example) or the tool itself 281 (a small paint brush) helping the user immediately visualize the properties of the tool in the computer program.
  • any targeted tool can be “aliased” by another tool.
  • the tip of the brush could be redefined inside the computer program to act like the tip of a drill.
  • the location and orientation of the drill tip, as well as the drill parameters such as its width, can be derived from the target, together with its path and interrupt-member information.
  • the user can operate his CAD system as though he were operating a set of workshop or artist tools rather than traversing a set of menus.
  • the work tool and an object to be worked on can be targeted, and sensed either simultaneously or one after the other. Their relative locations and orientations can be derived allowing the user, for example, to “whittle” her computer model of the object 285 that she has in one hand with the tool 286 that is in the other hand.
  • a set of objects that are part of a house design process such as a door, a window, a bolt or a hinge could be defined quickly without having the user traverse a set of menus.
  • This device can perform an extremely broad range of input tasks for manipulation of 2D or 3D applications.
  • the devices that are used today for such activity are typically a mouse or a graphic tablet. Both of these devices really tend to work only in two dimensions. People have had the experience with the mouse where it slips or skips over the mouse pad, making it difficult to accurately position the cursor.
  • the graphic tablet is somewhat easier to manipulate but it is bulky, covering up the desktop surface.
  • the disclosed invention can replace either of these devices. It never gets stuck, since it moves in air. We can attach a target to the top of one of our hands or paint our fingernails and have them act as a target. Alternatively, for example, we can pick up a pointing device such as a pencil with a target attached to the top of it.
  • Targets can also be attached to objects, tools, and hands. Commands can be entered by voice, buttons, other member manipulations, or even by the path of a target itself.
  • a user can acquire a 3D model (plastic, clay, etc.) by hitting the C key and rubbing either targeted fingers or a hand-held targeted sculpting tool over the model. From the path of the targeted fingers or tool we can compute the surface by applying the offset characteristics of the targeted tool. If the 3D object is made of a deformable material such as clay, the CAD system can reflect the effect of the fingers or tool passing over the model on each pass. If we want, we can add some clay on top of the model to build up material where we need it. Thus we can tie art forms such as clay modeling directly into CAD or other computer systems.
  • FIG. 3 illustrates additional embodiments working virtual objects, and additional alias objects according to the invention.
  • a first object can be a pencil, with the second object a piece of paper. It also illustrates how we can use computer-image-determined tool position and orientation (targeted or otherwise) to give the user tactile and visual feedback as to how the motion, location, and orientation of the tool will affect the application computer program.
  • the user of the computer application program may have several tools that she feels comfortable with on her desk.
  • An artist for instance might have a small paintbrush, a large paintbrush, a pen, an eraser, and a pencil. Each of these would have a unique target attached to it. The artist would then pick up the tool that she would normally use and draw over the surface of a sheet of paper or over the surface of display screen or projection of computer display.
  • the application software would not only trace the path of the tip of the targeted work tool, but also treat the tool as though it were a pen or paintbrush, etc. The exact characteristics of the pen would be found in the associated database using the target ID as a lookup key. Extra parameters such as the width of the line, its color, or whether it is a dashed line could be determined by keyboard input or by applying the height or target-orientation parameters.
  • This invention gives the user the natural tactile and visual feedback that she is used to in her art.
  • an artist would use targeted versions of the very tools such as pens 306 , paintbrushes 305 , and erasers 310 that she uses without a computer.
  • By drawing with a targeted tool (e.g., 336, having target 337) on a paper pad (e.g., 350 shown in FIG. 3 b, with target 342) or canvas, the user again continues to experience the traditional non-computer art form as a computer interface. (Targets in multiple corners of the paper can also be used for added resolution of paper location with respect to the tool.) The user would see her art drawn on the paper while creating a computer version with all of the editing and reproduction capabilities implied by computers. The targeted tool's motion relative to the targeted paper is what determines the line in the graphics system. Thus the user could even put the pad in her lap and change her position in a chair and still properly input the graphic information as she draws on the paper, as long as the targets continue to be in the view of the camera system.
  • Parameters such as line width, or line type, etc. can be controlled by the target parameters that are not used to determine the path of the line (usually this would be the target height and orientation).
  • This invention allows the user to “alias” any object with any other object.
  • This invention allows users to control computer programs by moving targeted objects around inside the target volume rather than having to learn a different menu system for each software package.
  • a child could quickly learn how to create 3D CAD designs by moving targeted toy doors 361 , windows 362 , drills 360 , and pencils.
  • a user would create a hole in an object the same way on different CAD systems by moving say a tool such as a drill starting at the proper location and orientation and proceed to the proper depth.
  • the Quant might be stored as a compact set of 7 numbers and letters (4, 1, 2, 3, 4, a, 27), where 4 is the number of path segments, 1-4 are numbers that identify path-segment directions (i.e., right, up, left, down), “a” is the member interrupt (the key press a), and 27 is the target ID.
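One plausible way to hold the Quant of the example above in software is sketched below; the class and its encoding method are illustrative assumptions that simply mirror the 7-element layout given in the text.

```python
# Sketch: a record type whose encode() reproduces the compact tuple from the example.
from dataclasses import dataclass
from typing import List, Tuple

DIRECTIONS = {1: "right", 2: "up", 3: "left", 4: "down"}

@dataclass
class Quant:
    segment_directions: List[int]   # e.g. [1, 2, 3, 4] = right, up, left, down
    interrupt: str                  # the activation key press, e.g. "a"
    target_id: int                  # which targeted object made the path

    def encode(self) -> Tuple:
        """Pack as (segment count, directions..., interrupt, target ID)."""
        return (len(self.segment_directions), *self.segment_directions,
                self.interrupt, self.target_id)

q = Quant([1, 2, 3, 4], "a", 27)
print(q.encode())   # (4, 1, 2, 3, 4, 'a', 27)
```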
  • FIG. 7 a illustrates a flow chart as to how target paths and Quants can be defined.
  • FIG. 4 illustrates a car driving game embodiment of the invention, which in addition illustrates the use of target-based artifacts and simplified head tracking with viewpoint rotation.
  • the car dash is for example a plastic model purchased or constructed to simulate a real car dash, or can even be a make-believe dash (i.e. in which the dash is made, for example, from a board, and the steering wheel from the wheel of a wagon or other toy, or even a dish), and the car is simulated in its actions via computer imagery and sounds.
  • Cameras 405 and 406 forming a stereo pair, and light sources as required are desirably mounted on rear projection TV 409 , and are used together with computer 411 to determine the location and orientation of the head of a child or other game player.
  • the computer provides from software a view on the screen of TV 409 (and optionally sound, on speakers 413 and 414) that the player would see as he turns his head, e.g. right or left (and optionally up or down, not so important in a car game driven on a horizontal plane, but important in other games which can be played with the same equipment but different programs).
  • This viewpoint rotation is provided using the cameras to determine the orientation of the head from one or more targets 415 attached to the player's head or, in this case, a hat 416.
  • target 420 on the steering wheel which can be seen by stereo pair of cameras 405 and 406 .
  • the target moves in a rotary motion which can be transduced accordingly, or as a compound x and y motion by the camera processor system means in computer 411 .
  • the target 420 can alternately be attached to any object that we choose to act as a steering wheel 421, such as the wheel of a child's play dashboard toy 425.
  • a prefabricated plywood or plastic molded dashboard can be supplied having other controls incorporated, e.g. gas pedal 440, hinged at the bottom with hinge 441 and preferably providing an elastic tactile feedback, which has target 445 viewed by cameras 405 and 406 such that its y axis position and/or z axis (range) changes as the player pushes down on the pedal. This change is sensed and determined by TV-based stereo photogrammetry using the cameras and computer, and the data is then converted by computer 412 into information which can be used to modify the display or audio signals, providing simulations of the car's acceleration or speed depicted with visual and auditory cues.
  • a brake pedal or any other control action can be provided, for example by moving a dashboard lever such as 450 sideways (moving in this case a target on its rear facing the camera, not shown for clarity, in x axis motion), or by turning a dashboard knob such as 455 (rotating a target, not shown, on its rear facing the camera).
  • One camera system (single or stereo pair or other) can be used to follow all of the targets at once or several camera systems can follow separate targets.
  • This invention can turn toys or household objects into computer controls or game controls. This is most easily accomplished by attaching one or more special targets to them, though natural features of some objects can be used.
  • the invention allows simplified head tracking with viewpoint rotation.
  • Some further detail on the embodiment of FIG. 4: a boy is seated in front of a low cost plastic or plywood dashboard to which a targeted steering wheel and gas and brake pedals are attached (also gear shifts, and other accessories as desired).
  • a target on the boy's hat is observed, as are the targets on the individual items of the dash, in this case by a stereo pair of cameras located atop the TV display screen, which is of large enough size to seem real (for example, a width at least that of the dashboard is preferable).
  • Retro-reflective tape targets of Scotchlite 7615 material are used, illuminated by light sources in close adjacency to each camera.
  • a TV image of the boy's face can also be taken to show him at the wheel, leaning out the window (likely imaginary), etc.
  • the boy can move his head from left to right and the computer changes the display so he sees a different view of his car on the track, and up and down, to move from the driver's view of the road to an overhead view of the course, say.
  • Stereo cameras may be advantageously located on a television receiver looking outward at the back of an instrument panel, having targeted levers and switches and steering wheel, etc. whose movement and position is determined along with that of the player, if desired.
  • the panel can be made out of low cost wood or plastic pieces.
  • the player can wear a hat with targets viewed in the same field of view as the instrument panel; this allows all data in one view. As he moves his head to lean out the car window, so to speak, the image on screen shifts its viewpoint (typically in an exaggerated manner; a small angular head movement might rotate the view 45 degrees in the horizontal or vertical direction on the screen).
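  • A minimal sketch of such an exaggerated head-to-view mapping is given below; the gain and clamping values are illustrative assumptions, not taken from the invention:

```python
# Illustrative sketch of the exaggerated head-to-view mapping described above:
# a small measured head rotation is amplified so a modest lean "out the window"
# swings the displayed view by a large angle.  Gains are assumed values.

def view_rotation(head_yaw_deg, head_pitch_deg, gain=9.0, limit=45.0):
    """Map measured head angles to display view angles, clamped to +/- limit."""
    clamp = lambda a: max(-limit, min(limit, a))
    return clamp(gain * head_yaw_deg), clamp(gain * head_pitch_deg)

# A 5 degree head turn rotates the displayed view by 45 degrees.
print(view_rotation(5.0, 0.0))   # (45.0, 0.0)
```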
  • This invention allows one to change the game from cars to planes just by changing the low cost plastic or wood molded toy instrument panel with its dummy levers, switches, sliders, wheels, etc.
  • These actuating devices are, as noted, desirably targeted for easiest results, for example with retroreflector or LED targets of high visibility and accurately determinable position.
  • the display used can be that of the TV, or separately incorporated (and preferably removable for use in other applications), as with an LCD (liquid crystal display) on the instrument panel. Multi-person play is possible, and can be connected remotely.
  • a change in x and/or y can be taught to the system to represent the range of gas pedal positions, by first engaging a teach mode in which one can, as shown in FIG. 4, input a voice command to tell the system that a given position is gas pedal up, gas pedal down (max throttle), or any position in between.
  • the corresponding image positions of the target on the gas pedal lever member are recorded in a table and looked up (or alternatively converted to an equation) when the game is in actual operation, so that the gas pedal input command can be used to cause imagery on the screen (and audio of the engine, say) to give an apparent speedup or slowing down of the vehicle.
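  • The teach-mode table and lookup described above might be sketched as follows; the function names, pixel coordinates, and linear interpolation are illustrative assumptions:

```python
# Minimal sketch of the teach mode described above.  During teaching, the
# operator names two (or more) pedal positions by voice and the image y
# coordinate of the pedal target is stored; during play, intermediate
# positions are linearly interpolated into a throttle value.  Names and
# coordinates are illustrative.

taught = []  # list of (image_y_pixels, throttle_fraction)

def teach(image_y, throttle):
    taught.append((image_y, throttle))
    taught.sort()

def throttle_from_image(image_y):
    """Interpolate the taught table; clamp outside the taught range."""
    lo, hi = taught[0], taught[-1]
    if image_y <= lo[0]:
        return lo[1]
    if image_y >= hi[0]:
        return hi[1]
    for (y0, t0), (y1, t1) in zip(taught, taught[1:]):
        if y0 <= image_y <= y1:
            return t0 + (t1 - t0) * (image_y - y0) / (y1 - y0)

teach(400, 0.0)   # "gas pedal up"
teach(460, 1.0)   # "gas pedal down (max throttle)"
print(throttle_from_image(430))   # 0.5
```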
  • the wheel can be turned right to left, with similar results, and the brake pedal lever and any other control desired can also be so engaged. (as noted below, in some cases such control is not just limited to toys and simulations and can also be used for real vehicles)
  • the position, velocity, and rate of change of targeted member positions can also be determined, to indicate other desirable information to the computer analyzing the tv images.
  • this aspect of the invention allows one to create 3D physical manifestations of instruments in a simulation form, much as National Instruments firm has pioneered two dimensional TV screen only displays.
  • an “instrument panel” can also be used to interact with conventional programs—even word processing, spreadsheets and the like where a lever moved by the user might shift a display window on the screen for example.
  • a selector switch on the panel can shift to different screens altogether, and so forth.
  • FIG. 4 has also illustrated the use of the invention to create a simple general-purpose visual and tactile interface to computer programs.
  • FIG. 5 a illustrates a one-person game where a targeted airplane model 505 can be used to define the course of an airplane in a game.
  • the orientation of the plane, determined from targets 510, 511, and 512 (on the wings and fuselage respectively) by camera(s) 530, is used by a program resident in computer 535 to determine its position and orientation, and changes therein due to movement in the game.
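  • One hedged way to recover the toy plane's attitude from the three target positions reported by the stereo system is sketched below; the axis conventions and coordinates are assumed for illustration:

```python
# A hedged sketch of recovering the toy plane's attitude from the three
# target positions (wing tips and fuselage/nose) returned by the stereo
# system.  Coordinates are in the camera frame; axis conventions are assumed.
import math

def plane_attitude(left_wing, right_wing, nose):
    """Return (roll, pitch, yaw) in degrees from three 3D target points."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    span = sub(right_wing, left_wing)          # wing-to-wing vector
    centre = [(l + r) / 2 for l, r in zip(left_wing, right_wing)]
    fwd = sub(nose, centre)                    # fuselage (forward) vector
    yaw   = math.degrees(math.atan2(fwd[0], fwd[2]))                   # about vertical axis
    pitch = math.degrees(math.atan2(fwd[1], math.hypot(fwd[0], fwd[2])))
    roll  = math.degrees(math.atan2(span[1], math.hypot(span[0], span[2])))
    return roll, pitch, yaw

# Level flight, nose pointing away from the cameras (+z), wings along x:
print(plane_attitude([-0.2, 0.0, 1.0], [0.2, 0.0, 1.0], [0.0, 0.0, 1.3]))
```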
  • the model can be purchased pre targeted (where natural features such as colored circles or special retroreflectors might be used for example).
  • the plane's position and/or orientation, or changes therein, is used as an input to a visual display on the computer display and audio program to provide a realistic feeling of flight, or alternatively to allow the computer to stage a duel, wherein the opposing fighter is created in the computer and displayed either alone or along with the fighter represented by the player. This is particularly enhanced when a large screen display is used, for example >42 inches diagonal.
  • A two-person version is shown in FIG. 5 b, where the two computers can be linked over the Internet or via a cable across the room.
  • the two-person game airplane 510 is targeted with target 511, and its motion is sent over a communication link 515 to a second computer where another player has her airplane 520 with its target.
  • the two results can be displayed on each computer display allowing the users to interactively modify their position and orientation.
  • An interrupt member can trigger the game to fire a weapon or reconfigure the vehicle.
  • a set of targets 514 can even be attached (e.g. with velcro, to her hands or wrists, and body or head) to the player 513, allowing her to "become" the airplane as she moves around in front of the cameras. This is similar to a child today pretending to be an airplane, with arms outstretched. It is thus a very natural type of play, but with exciting additions of sounds and 3D graphics to correspond to the moves made.
  • a plane representation such as an F16 on the screen can also bank.
  • the guns can fire.
  • Also illustrated in FIG. 5 c is a one- or multi-person "Big Bird" or other hand puppet game embodiment of the invention, played if desired over remote means such as the Internet. It is similar to the stuffed animal application described above, except that the players are not in the same room. And, in the case of the Internet, play is bandwidth limited, at least today.
  • Child 530 plays with doll or hand puppet 550, for example Sesame Street's "Big Bird", which can be targeted using targets 535 and 540 on its hands 551 and 552 and curvilinear line type targets 553 and 554 outlining its upper and lower lips (beak).
  • Target motion sensed by stereo pair of cameras 540 and 541 is transformed by computer 545 into signals to be sent over the internet 555 or through another communication link to allow a second child 556 to interact, moving his doll 560 with say at least one target 561 .
  • the invention comprehends a full suite of up to 6 degree of freedom gesture type inputs, both static and dynamic, and sequences of dynamic movements.
  • FIG. 6 illustrates other movements such as gripping or touch indicating which can be useful as input to a computer system.
  • Parts of the user, such as the hands, can describe motion or position signatures and sequences of considerable utility.
  • This is essentially a method of signaling action to the computer using the detected position of one finger, two fingers of one hand, one finger of each hand, or two hands, or the relative motion/position of any of the above with respect to the human, the computer camera system, or the screen (itself generally fixed with respect to the camera system).
  • the program of computer 630 recognizes this motion of fingernails 635 and 636, seen by cameras 640 and 641 connected to the computer which processes their images, as a pinch/grasp motion, and can either cause the image of the cow to be compressed graphically or, if the hand is pulled away within a certain time, interpret it as a grasp, in which case the cow object is moved to a new location on the screen where the user deposits it, for example at position 650 (dotted lines). Or it could be placed "in the trash".
  • a microphone 655 can be used to input voice commands into the computer 630, which can then, using known technology (Dragon software, IBM ViaVoice, etc.), be used to process the command.
  • a typical command might be grip, move, etc., if these weren't obvious from the detected motion itself.
  • speakers 660 controlled by the computer can give back data to the user such as a beep when the object has been grasped.
  • the scale of the grip of the fingers relative to the screen (and the object thereon being gripped) desirably has a variable scale factor dependent on detected range from the sensor (unless one is always to touch the screen, or come very near it, to make the move).
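  • A minimal sketch of such pinch/grasp classification with a range-dependent scale factor follows; the thresholds, coordinates, and the pull-away test are illustrative assumptions:

```python
# Illustrative sketch of the pinch/grasp logic described above: the distance
# between the two fingernail targets, scaled by the detected range from the
# cameras, decides whether the hand is pinching, and a quick pull-away after
# the pinch is read as a grasp.  Thresholds are assumed values.

def classify(thumb, finger, prev_range, dt,
             pinch_mm=25.0, pull_mm_per_s=150.0):
    """thumb/finger are 3D points (mm, camera frame); prev_range is last range."""
    gap = sum((a - b) ** 2 for a, b in zip(thumb, finger)) ** 0.5
    rng = (sum(c * c for c in thumb) ** 0.5 + sum(c * c for c in finger) ** 0.5) / 2
    # Scale the pinch threshold with range so a grip far from the screen
    # need not be as tight as one made while touching it.
    threshold = pinch_mm * (1.0 + rng / 1000.0)
    pinching = gap < threshold
    pulling = (rng - prev_range) / dt > pull_mm_per_s
    if pinching and pulling:
        return "grasp", rng
    return ("pinch" if pinching else "open"), rng

state, r = classify((0, 0, 600), (18, 5, 600), prev_range=600, dt=0.033)
print(state)   # 'pinch'
```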
  • Pinching or Gripping is very useful in combination with voice for word processing and spreadsheets.
  • FIG. 7 (block diagram)
  • FIG. 7 illustrates the use of this invention to implement an optically based computer input for specifying software program commands and parameters, defining new objects or new actions in an application computer program, temporarily redefining some or all of the database associated with the target, or calling specific computer programs, functions, or subroutines.
  • a sequence of simple path segments of the targets obtained by this invention separated by “Quant punctuation” together with its interrupt member settings and its target ID can define a unique data set.
  • We refer to this data set as a "Quant", referring to the discrete states (much like quantum states of the atom).
  • the end of each path segment is denoted with a “Quant punctuation” such as radical change in path direction or target orientation or speed or the change in a specific interrupt member or even a combination of the above.
  • the path segments are used to define a reduced or quantized set of target path information.
  • a Quant has an associated ID (identification number) which can be used as a look-up key in an associated database to find the associated program commands, parameters, objects, actions, etc. as well as the defining characteristics of the Quant.
  • At the next pause, analyze the vector direction of the path segment initiated with the last pause and ending with this pause. The first and last points of this segment define a vector direction that is mainly upward, with no significant left/right or in/out component; identify this as direction 2.
  • At the next pause, analyze the vector direction of the path segment initiated with the last pause and ending with this pause. The first and last points of this segment define a vector direction that is mainly to the left, with no significant up/down or in/out component; identify this as direction 3.
  • At the next pause, analyze the vector direction of the path segment initiated with the last pause and ending with this pause. The first and last points of this segment define a vector direction that is mainly downward, with no significant left/right or in/out component; identify this as direction 4.
  • the Quant might be stored as a compact set of 7 numbers and letters (4, 1, 2, 3, 4, a, 27), where 4 is the number of path segments, 1-4 are numbers that identify the path segment directions (i.e. right, up, left, down), "a" is the member interrupt (the key press a), and 27 is the target ID.
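  • The Quant encoding and database lookup described above might be sketched as follows; the direction quantization rule and the command table entry are illustrative assumptions:

```python
# A small sketch of the Quant representation described above: the number of
# path segments, the quantized direction of each segment, the interrupt
# member state, and the target ID, packed into one tuple and used as a
# database key.  Direction codes follow the example (1=right, 2=up, 3=left,
# 4=down); the command table entry is illustrative.

DIRECTIONS = {(1, 0): 1, (0, 1): 2, (-1, 0): 3, (0, -1): 4}

def quantize(segment_vectors, interrupt, target_id):
    """segment_vectors: list of (dx, dy) between successive pauses."""
    dirs = []
    for dx, dy in segment_vectors:
        # keep only the dominant axis of each segment
        key = (1 if dx > 0 else -1, 0) if abs(dx) >= abs(dy) else (0, 1 if dy > 0 else -1)
        dirs.append(DIRECTIONS[key])
    return (len(dirs), *dirs, interrupt, target_id)

COMMANDS = {(4, 1, 2, 3, 4, "a", 27): "draw rectangle"}

quant = quantize([(10, 1), (0, 12), (-9, 2), (1, -11)], "a", 27)
print(quant, "->", COMMANDS.get(quant, "unknown"))
# (4, 1, 2, 3, 4, 'a', 27) -> draw rectangle
```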
  • FIG. 7 a illustrates a flow chart as to how target paths and Quants can be defined.
  • the continuous circular sweep rather than punctuated segments might define a circle command in a CAD system. Some Quants might immediately initiate the recording of another Quant that provides the information needed to complete the prior Quant instruction.
  • This method can be applied to sculpture, where the depth of a planar cut or the whittling of an object can be determined by the characteristics of the targeted object's path (in other words, by its Quant).
  • FIG. 8 illustrates the use of this invention for medical applications.
  • a user can apply this invention for teaching medical and dental students, or controlling robotic equipment used for example in medical and dental applications.
  • this invention can be used to give physically controlled lookup of databases and help systems.
  • a scalpel has two targets 801 and 802 (in this case triangular targets), allowing a 6 degree of freedom solution of the position and orientation of the scalpel 811 to which they are attached, having a tip 815.
  • Other surgical instruments can also be used, each with their own unique targets and target ID's, if desired, to allow their automatic recognition by the electro-optical sensing system of the invention.
  • the figure shows a medical student's hand 820 holding a model of a surgical instrument, a scalpel.
  • a model of a body can be used to call up surgical database information, in the computer attached to the camera system, about the body parts in the vicinity of the point on the body model 825 being touched. If the targeted tool is pressed down, compressing the spring 810 and moving the targets 801 and 802 apart, the information displayed can refer to internal body parts. The harder the user presses down on the spring, the farther apart the targets move and the deeper in the body the displayed information refers; this can be used to instruct the database as to which layer of information the computer should display.
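  • A minimal sketch of mapping the measured target separation (spring compression) to the depth layer of database information follows; the separation bands and layer names are assumed for illustration:

```python
# Hedged sketch of the depth lookup described above: the measured separation
# of targets 801 and 802 (which grows as the spring is compressed) selects
# how deep into the body the displayed database information refers.  The
# separation bands and layer names are illustrative.

DEPTH_LAYERS = [
    (0.0,  "skin / surface anatomy"),
    (5.0,  "muscle and connective tissue"),
    (10.0, "organs"),
    (15.0, "skeletal structure"),
]

def layer_for_separation(extra_separation_mm):
    """Pick the deepest layer whose threshold the separation has passed."""
    chosen = DEPTH_LAYERS[0][1]
    for threshold, name in DEPTH_LAYERS:
        if extra_separation_mm >= threshold:
            chosen = name
    return chosen

print(layer_for_separation(2.0))    # skin / surface anatomy
print(layer_for_separation(12.0))   # organs
```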
  • if the user wants to look up information on drugs that are useful for organs in a given region of the body, he might use a similar model syringe with a different target having a different ID.
  • a medical (or dental) student could be tested on his knowledge of medicine by using the same method to identify and record in the computer location on the body that is the answer to a test question.
  • the location and orientation of the targeted tool can be used to control the path of a robotic surgery tool.
  • each surgical device has its own unique target and its own unique target ID.
  • One of the unique features of this invention is that the user can use the actual surgical tool that he uses normally in the application of his art. Thus, a dental student can pick up a standard dental drill, and the target can be attached to a dental drill that has the same feel as an ordinary drill.
  • FIG. 8 b shows how several objects can be attached to specialized holders that are then attached to a baseboard to create a single rigid collection whose location and orientation can be preregistered and stored in a computer database, such that only a single targeted pointer or tool need be tracked.
  • the baseboard has one or more specialized target attachment locations.
  • baseboard/holder attachments can be fixed (such as pegboard/hole) or freeform (using for example magnets or velcro).
  • Charts 8 d and 8 e describe how these might be calibrated.
  • Attachable targets can be used to pre-register the location and orientation of 1 or more objects relative to a camera system and to each other using a baseboard 839 shown here with square pegs 837 and an attachment fixture 838 that will hold a specialized target such as those shown as 855 , 856 , 857 .
  • a set of objects here shown as a model of a body 840 and a model of a heart 841 with attachment points 842 and 843 that are attached to object holders 845 and 846 at attachment points 847 and 848 .
  • the object holders can be of different shapes allowing the user to hold the object at different orientations and positions as desired.
  • Each object holder has an attachment fixture 850 and 851 that will hold a specialized target.
  • Charts 8 d and 8 e describe the calibration process for a fixed and a freeform attachment implementation respectively. Once the baseboard and targets have been calibrated, a computer program can identify which object is being operated on and determine how this information will be used. The steps for utilizing this system are described in Chart 8 f.
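  • Once the objects on the baseboard are pre-registered, only the pointer or tool tip need be tracked; a hedged sketch of the resulting lookup is given below, with positions and touch radii assumed for illustration:

```python
# A minimal sketch of using the pre-registered baseboard: each object's
# position is calibrated once relative to the baseboard target, stored with a
# touch radius, and thereafter only the pointer or tool tip needs tracking.
# Positions and radii are illustrative.

REGISTERED = {
    "body model 840":  {"centre": (120.0, 40.0, 0.0), "radius": 90.0},
    "heart model 841": {"centre": (300.0, 60.0, 0.0), "radius": 35.0},
}

def object_under_tip(tip_xyz):
    """Return the pre-registered object (if any) that the tracked tip is touching."""
    for name, entry in REGISTERED.items():
        d = sum((t - c) ** 2 for t, c in zip(tip_xyz, entry["centre"])) ** 0.5
        if d <= entry["radius"]:
            return name
    return None

print(object_under_tip((310.0, 55.0, 5.0)))   # heart model 841
```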
  • FIG. 8 c illustrates how a dentist with a targeted drill, and a target attached to a patient's teeth, can have the computer linked to the camera system perform an emergency pull-back of the drill if the patient sneezes.
  • the output of the sensed condition such as hand or feet position
  • FIG. 9 illustrates a means for aiding the movement of a person's hands while using the invention in multiple degree of freedom movement.
  • a joystick is often used for game control. Shown in FIG. 9 a is a joystick 905 of the invention having an end including a ball 910, in which data on the position of datums on the ball at the end of the stick is taken optically by the video camera 915 in up to 6 axes, using a square retroreflective target 920 on the ball.
  • the stick of this embodiment, unlike other joysticks, is itself provided not as a transduction device, but to support the user. Alternatively some axes can be transduced, e.g. with LVDTs or resolvers, while data in other axes is optically sensed using the invention.
  • one object may be held in each hand; or one can use two joysticks as above, or one stick aid as shown here with one hand free, for example.
  • FIG. 9 b shows an alternate to a joystick, using retroreflective material targets attached to fingers 930 , 931 and 932 resting on a floating pad 935 resting on a liquid 940 in a container 945 .
  • the floating pad gives comfortable support to the hand while freely allowing the targeted hand to move and rotate.
  • FIG. 9 c shows another more natural way to use this invention in a way that would eliminate Carpal Tunnel syndrome.
  • FIG. 10 illustrates a natural manner of computer interaction for aiding the movement of a person's hands while using the invention in multiple degree of freedom movement, with one's arms resting on an armrest of a chair, car, or the like.
  • user 1005 sitting in chair 1010 has his thumb and two fingers on both hands 1011 and 1012 targeted with ring shaped retroreflector bands 1015 - 1020 as shown. All of the datums are seen with stereo TV camera pair 1030 and 1031 on top of display 1035 driven by computer 1040 which also processes the tv camera images.
  • one hand can hold an object, and the user can switch objects as desired, in one or both of his hands, to suit the use desired, as has been pointed out elsewhere in this application.
  • the armrest itself may contain other transducers to further be used in conjunction with the invention, such as force sensors and the like.
  • This figure illustrates an embodiment wherein other variable functions in addition to image data of scene or targets are utilized.
  • added variables can be via separate transducers interfaced to the computer or desirably provided by the invention in a manner to coexist with the existing TV camera pickups used for position and orientation input.
  • a particular illustration of a level vial in a camera field of view illustrates as well the establishment of a coordinate system reference for the overall 3-6 degree of freedom coordinate system of camera(s).
  • a level vial 1101 located on the object 1102 is imaged by single camera 1140 along with the object, in this case having a set of 3 retro-reflective targets 1105-1107, and a retro-reflector 1120 behind the level vial to aid in returning light from the near co-axial light source 1130 (and particularly from the meniscus 1125) to camera 1140, used both for single camera photogrammetry to determine object position and orientation, and as well to determine the level of the object in one or two planes with respect to the earth.
  • the level measuring device such as a vial, inclinometer, or other device can also be attached to the camera and with suitable close-up optics incorporated therewith to allow it to be viewed in addition to the scene.
  • in this way the camera pointing direction is known with respect to the earth, or whatever is used to zero the level information, which can be very desirable.
  • This figure illustrates a touch screen constructed according to the invention employing target inputs from fingers or other objects in contact with the screen, either of the conventional CRT variety, or an LCD screen, or a projection screen—or virtual contact of an aerial projection in space.
  • a user 1201 with targeted finger 1203 whose position in 3D space relative to TV screen 1205 (or alternatively absolute position in room space) is observed by camera system 1210 comprising a stereo pair of cameras (and if required light sources) as shown above.
  • the system reads the xy location, in the xy plane of the screen, for example.
  • Target datums on the screen, either retro-reflectors or LEDs, say at the extremities, or projected onto the screen by the electron guns or other light projection devices of the TV system, can be used to indicate to, or calibrate, the stereo camera system of the invention relative to the datum points of interest on the screen.
  • calibration datums 1221-1224 are shown projected on the screen, either in a calibration mode or continuously, for use by the stereo camera system, which can for example search for their particular color and/or shape. These could be projected for a very short time (e.g. one 60 Hz TV field) and synched to the camera, such that the update in calibration of the camera to the screen might seem invisible to the user.
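  • A minimal sketch of using such corner datums to express a tracked 3D point in screen coordinates follows; it assumes a flat screen and uses illustrative corner positions:

```python
# Hedged sketch of using the four corner datums 1221-1224: once their 3D
# positions are measured by the stereo pair, a fingertip's 3D point can be
# expressed in screen (x, y) fractions by projecting onto the screen's edge
# vectors.  A flat screen and illustrative corner coordinates are assumed.

def screen_xy(point, top_left, top_right, bottom_left):
    """Return (u, v) with (0,0) at the top-left corner and (1,1) bottom-right."""
    def sub(a, b): return [p - q for p, q in zip(a, b)]
    def dot(a, b): return sum(p * q for p, q in zip(a, b))
    x_edge = sub(top_right, top_left)      # along the top of the screen
    y_edge = sub(bottom_left, top_left)    # down the left side
    rel = sub(point, top_left)
    u = dot(rel, x_edge) / dot(x_edge, x_edge)
    v = dot(rel, y_edge) / dot(y_edge, y_edge)
    return u, v

tl, tr, bl = (0, 0, 0), (800, 0, 0), (0, 600, 0)
print(screen_xy((400, 300, 10), tl, tr, bl))   # roughly (0.5, 0.5)
```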
  • a specially targeted or natural finger can be used with the invention, or an object either natural (e.g. a pencil point) or targeted (a pencil with a retroreflector near its tip, for example) can be used.
  • the natural case is not as able to specifically define a point, however, due to machine vision problems in defining its position using the limited numbers of pixels often available in low cost cameras.
  • the retro-reflector or LED target example is also much faster, due to light power available to the camera system, and the simplicity of solution of its centroid for example.
  • the screen may itself provide tactile feel.
  • the object removing the material could be a targeted finger or other object such as a sculpture tool.
  • the actual removal of material could be only simulated, given a deformable screen feel, or with no feel at all, if the screen were rigid.
  • the object on which the projection is displayed need not be flat like a screen, but could be curved to better represent or conform to the object shape represented, or for other purposes.
  • The embodiment of the invention of FIG. 12 can be further used for computer aided design, particularly with large screens which can give life size images, and for use with life size tools and finger motion.
  • the use of the inputs herein described, as with respect to the figure above, is expected to revolutionize computer aided design and related fields, in the sense of making computer use far more intuitive and able to be used effectively by the populace as a whole.
  • the computer 1240 taking the data from stereo pair of tv cameras 1210 can cause the TV screen to display the car undercarriage life size, or if desired to some other scale.
  • the designer can look for interferences and other problems as if it were real, and can even take a real physical part if desired, such as a pipe or a muffler, and lay it life size against the screen where it might go, and move the other components around “physically” with his hand, using his hand or finger tracked by the tv camera or cameras of the system as input to the corresponding modification to the computer generated image projected.
  • the invention has the ability to focus ones thoughts to a set of motions—fast, intuitive and able to quickly and physically relate to the object at hand. It is felt by the inventors that this will materially increase productivity of computer use, and dramatically increase the ability of the computer to be used by the very young and old.
  • the screen should be a high definition TV (HDTV) such that a user looking at one side sees good detail and can walk over to another side and also see good detail.
  • FIG. 13 illustrates another useful big screen design application in full size: designing a dress on a model.
  • the use of the big screen allows multiple people to interact easily with the task, and allows a person to grip a portion of the prototype dress on the screen and move it elsewhere (in this case fingertips as targets would be useful). It also allows normal dressmaking tools to be used, such as a targeted knife or scissors.
  • Illustrated is clothing design using finger touch and targeted material.
  • the invention is useful in this application both as a multi-degree of freedom input aide to CAD as disclosed elsewhere herein, and for the very real requirement to establish the parameters of a particular subject (a customer, or representative “average” customer, typically) or to finalize a particular style prototype.
  • the object, in this case a human or manikin, with or without clothes, can be digitized for the purpose of planning initial cutting or sewing of the material. This can be accomplished with the invention using a simple laser pointer. It is believed that some similar ideas have been developed elsewhere, using projection grids, light stripes or the like. However, the digitization of the object can be accomplished at very low cost as described below using the multicamera stereo vision embodiment of the invention.
  • the cloth itself can be targeted, and the multicamera stereo acquired target data before tryout and/or the distorted data (such as position, location or shape) after tryout determined, and modifications made, using this data to assist in modifying the instant material or subsequent material desired.
  • Such instruction to the computer can for example be by voice recording (for later analysis, for example) or even instant automatic voice recognition.
  • it can be via some movement such as a hand movement indication she makes which can carry pre-stored and user programmable or teachable meaning to the computer (described also in FIG. 7 above and elsewhere herein).
  • moving her finger 1310 up and down in the air may be sensed by the camera and discerned as a signal to let out material vertically. A horizontal wave would be a signal to do so horizontally.
  • she might hold an object with a target on it in her other hand, and use it to provide a meaning.
  • she can make other movements which can be of use as well.
  • by pinching her fingers, which could be targeted for ease of viewing and recognition, she could indicate taking up material (note she can even pinch the material of a prototype dress just as she would in real life).
  • the model could alternatively point a laser pointer such as 1320, with spot 1321, at the point on herself needed, the 3D coordinates of the laser-designated point being determined by the stereo cameras imaging the laser spot. This too can be done with a scanning motion of the laser to obtain multiple points. Zones other than round spots can be projected as well, such as lines formed with a cylinder lens. This allows a sequence of data points to be obtained from a highly curved area without moving the laser, which can cause motion error. Alternatively, she could use a targeted object, such as a scissors or ruler, to touch herself with, not just her finger, but this is not as physically intuitive as one's own touch.
  • a microphone 1340 may be used to pick up the model's voice instructions for the computer. Since instruction can be made by the actual model trying on the clothes, others need not be present. This saves labor to effect the design or modification input, and is perhaps in some cases less embarrassing. Such devices might then be used in clothing store dressing rooms, to instruct minor modifications to otherwise ready-to-wear clothes desired for purchase.
  • a laser pointer can have other uses as well in conjunction with the invention.
  • a designer can point at a portion of a model, or clothes on the model and the system can determine where the point falls in space, or relative to other points on the model or clothes on the model (within the ability of the model to hold still).
  • the pointer can also be used to indicate to the computer system what area is in need of work, say by voice, or by the simple act of pointing, with the camera system picking up the pointing indication.
  • the pointer can project a small grid pattern (crossed lines, dot grid, etc.) or a line or a grille (parallel lines) on the object to allow multiple points in a local area of the object to be digitized by the camera system.
  • Such local data say in a portion of the breast area, is often all that is needed for the designer.
  • with a pointer projector 1350 projecting a dot grid pattern of 5×5 or 25 equally spaced spots 1355 (before distortion in the camera image caused by curvature of the object) on a portion of bra 1360, picking up the spot images with the stereo cameras over areas that are not too curved is not too difficult.
  • if the points cannot be machine matched in the two stereo camera images by the computer program, such matching can be done manually from a TV image of the zone.
  • different views can also be taken for example with the model turning slightly which can aid matching of points observed.
  • added cameras from different directions can be used to acquire points.
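  • Once a projected spot has been matched in the two camera images (by machine or by hand as noted above), its 3D position can be recovered by standard stereo triangulation; the sketch below assumes rectified, parallel cameras and illustrative parameters:

```python
# A hedged sketch of recovering the 3D position of one projected spot once it
# has been matched in the two camera images.  Parallel, rectified cameras and
# illustrative parameters are assumed.

def triangulate(x_left, y_left, x_right, focal_px, baseline_mm):
    """Return (X, Y, Z) in mm, camera frame, from matched image coordinates."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("spot must have positive disparity")
    Z = focal_px * baseline_mm / disparity
    X = x_left * Z / focal_px
    Y = y_left * Z / focal_px
    return X, Y, Z

# One spot of the 5 x 5 grid, seen 40 pixels apart in the two images:
print(triangulate(x_left=120.0, y_left=-30.0, x_right=80.0,
                  focal_px=800.0, baseline_mm=200.0))
# (X, Y, Z) = (600.0, -150.0, 4000.0) mm
```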
  • FIG. 14 illustrates additional applications of alias objects such as those of FIG. 3, for purpose of planning visualization, building toys, and inputs in general.
  • a user in this case a child, 1401 , desires to build a building with his blocks, such as 1410 - 1412 (only a few of his set illustrated for clarity). He begins to place his blocks in front of camera or cameras of the invention such as cameras 1420 and 1421 which obtain stereo pair of images of points on his blocks which may be easily identified such as corners, dot markings, such as those shown, (which might be on all sides of the blocks) etc, and desirably are retro-reflective or otherwise of high contrast. Rectangular colored targets on rectangular blocks is a pleasing combination.
  • images of a building can be made to appear via software running in computer 1440 , based on inputs from cameras 1420 and 1421 shown here located on either side of TV screen 1430 .
  • These images can be in any state of construction, and can be any building, e.g. the Empire State building, or a computer generated model of a building. Or by changing software concerning the relevant images to be called up or generated, he could be building a ship, a rocket, or whatever.
  • Such an arrangement of a plurality of objects can be used for other purposes, such as for physical planning models in 3D, as opposed to today's computer generated PERT charts, Gantt charts, and organization charts in 2D.
  • Each physical object, such as the blocks above can be coded with its function, which itself can be programmable or selectable by the user.
  • some blocks can be bigger or of different shape or other characteristic in the computer representation, even if in actuality they are the same or only slightly different for ease of use, or cost reasons, say.
  • the target on the block can optically indicate to the computer what kind of block it is.
  • each individual block object could be a different machine, and could even be changed in software as to which machine is which.
  • some blocks could for example, in the computer represent machine tools, others robots, and so on.
  • FIG. 15 illustrates a sword play video game of the invention using one or more life-size projection screens. While large screens aren't needed to use the invention, the physical nature of the invention's input ability lends itself to same.
  • player 1501 holds sword 1502 having 3 targets 1503 - 1505 whose position in space is imaged by stereo camera photogrammetry system (single or dual camera) 1510 , and retroreflective IR illumination source 1511 , so that the position and orientation of the sword can be computed by computer 1520 as discussed above.
  • the display, produced by overhead projector 1525 connected to computer 1520 is a life size or near life size HDTV projection TV image 1500 directly in front of the player 1501 and immersing him in the game, more so than in conventional video games, as the image size is what one would expect in real life.
  • the whole game indeed may actually be on a human scale.
  • the enemies or other interacting forces depicted on the screen can in fact be human size and can move around by virtue of the computer program control of the projection screen just the same as they would have in life. This however makes it important, and is indeed part of the fun of using the invention, to employ human size weapons that one might use including but not limited to one's own personally owned weapons—targeted according to the invention if desired for ease of determining their location.
  • the opponents actions can be modeled in the computer to respond to those of the player detected with the invention.
  • a two or more player game can also be created where each player is represented by a computer modeled image on the screen, and the two screen representations fight or otherwise interact based on data generated concerning each player's position, or the positions of objects controlled or maneuvered by the players.
  • the same stereo camera system can if desired, be used to see both players if in the same room.
  • the player 1549 may use a toy pistol 1550 which is also viewed by Stereo camera system, 1510 in a similar manner to effect a “shootout at the OK corral” game of the invention.
  • the player's hand 1575 or holster 1520 and pistol 1585 may be targeted with one or more targets as described in other embodiments, and viewed by the stereo camera (single or dual) system of the invention, as in the sword game above.
  • the player draws his gun when a bad guy draws his and shoots.
  • His pointing (i.e. shooting) accuracy and timing may be monitored by the target-based system of the invention, which can determine the time at which his gun was aimed and where it was aimed (desirably using one or more targets or other features of his gun to determine pointing direction). This is compared in the computer 1520 with the time taken by the bad guy drawing, to determine the winner, if desired both in terms of time and accuracy of aiming by the player.
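  • The timing-and-accuracy comparison might be sketched as below; the opponent reaction time, the aiming tolerance, and the vector inputs are illustrative assumptions:

```python
# Illustrative sketch of the draw-and-shoot scoring described above: the
# player's reaction time (from the on-screen draw to the sensed firing event)
# and aiming error (angle between the gun's pointing direction and the bad
# guy's position) are compared against the programmed opponent.  Thresholds
# are assumed values.
import math

def duel_result(bad_guy_draw_t, player_fire_t, aim_dir, target_dir,
                opponent_time_s=0.60, max_aim_error_deg=5.0):
    reaction = player_fire_t - bad_guy_draw_t
    dot = sum(a * b for a, b in zip(aim_dir, target_dir))
    norm = (sum(a * a for a in aim_dir) ** 0.5) * (sum(b * b for b in target_dir) ** 0.5)
    aim_error = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    hit = aim_error <= max_aim_error_deg
    if hit and reaction < opponent_time_s:
        return f"player wins ({reaction:.2f}s, {aim_error:.1f} deg off)"
    return f"bad guy wins ({reaction:.2f}s, {'hit' if hit else 'miss'})"

print(duel_result(10.00, 10.45, aim_dir=(0.02, 0.0, 1.0), target_dir=(0.0, 0.0, 1.0)))
```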
  • An added feature is the ability of a TV camera of the invention to take (using one of the cameras used for datum detection, or a separate camera such as 1580) a normal 2D color photograph or TV image 1588 of a player or other person 1586, and via computer software, superpose it on, or otherwise use it to create via computer techniques, the image of one of the bad (or good) guys in the game. This adds a personal touch to the action.
  • FIG. 15 B illustrates on pistol 1585 a target indicator flag 1584 which is activated to signal the TV camera or cameras 1510 observing the pistol orientation and position.
  • when the gun is fired, the flag with the target pops up, indicating this event.
  • a LED can be energized to light (run by a battery in the toy) instead of the flag raising.
  • a noise such as a “pop” can be made by the gun, which noise is picked up by a microphone 1521 whose signal is processed using taught sounds and/or signature processing methods known in the art to recognize the sound and used to signal the computer 1520 to cause the projected TV image 1500 to depict desired action imagery.
  • a bad guy, or enemy depicted on the screen can shoot back at the player, and if so, the player needs to duck the bullet. If the player doesn't duck (as sensed by the tv camera computer input device of the invention,) then he is considered hit.
  • the ducking reflex of the player to the gun being visibly and audibly fired on the screen is monitored by the camera that can look at datums on, or the natural features of, the player, in the latter case for example, the center of mass of the head or the whole upper torso moving from side to side to duck the bullet or downward.
  • the computer TV camera combination can simply look at the position, or changes in the position, of the target datums on the player.
  • the center of mass in one embodiment can be determined by simply determining the centroid of pixels representing the head in the gray level tv image of the player.
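  • A minimal sketch of that centroid computation follows, assuming for illustration that a simple brightness threshold isolates the head in the gray level image:

```python
# A minimal sketch of the centroid computation mentioned above: pixels in the
# gray-level image brighter than a threshold (assumed to isolate the head in
# this simple case) are averaged to give the head's image-plane centre of mass.

def head_centroid(image, threshold=128):
    """image: 2D list of gray values; returns (row, col) centroid or None."""
    total = sx = sy = 0
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            if value > threshold:
                total += 1
                sx += c
                sy += r
    return (sy / total, sx / total) if total else None

frame = [[0, 0, 0, 0],
         [0, 200, 210, 0],
         [0, 190, 205, 0],
         [0, 0, 0, 0]]
print(head_centroid(frame))   # (1.5, 1.5)
```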
  • both the sword and the pistol are typically pointed at the screen, and since both objects are extensive in the direction of pointing, the logical camera location is preferably to the side or overhead—rather than on top or side of the screen, say.
  • line targets aligned with the object axis, such as 1586 on pistol 1585 are useful for accurately determining with a stereo camera pair the pointing direction of the object.
  • features or other data of the sword and pistol described, or the user, or other objects used in the game may be viewed with different cameras 1590 and 1591 (also processed by computer 1520 ) in order that at any instant in the game, sufficient data on the sword (or pistol, or whatever) position and/or orientation can be determined regardless of any obscuration of the targets or other effects which would render targets invisible in a particular camera view.
  • the computer program controlling the sensors of the game or other activity chooses the best views, using the targets available.
  • the primary mode of operation of the system could alternatively be to combine data from two cameras at all times. Often the location of choice is to the side or overhead, since most games are played more or less facing the screen with objects that extend in the direction to the screen (and often as result are pointed at the screen). For many sports however, camera location looking outward from the screen is desired due to the fact that datums maybe on the person or an object. In some cases cameras may be required in all 3 locations to assure an adequate feed of position or orientation data to computer 1520 .
  • the invention benefits from having more than 3 targets on an object in a field, to provide a degree of redundancy.
  • the targets should desirably be individually identifiable either due to their color, shape or other characteristic, or because of their location with respect to easily identifiable features of the sword object.
  • single targets of known shape and size such as triangles which allow one to use all the pixel points along an edge to calculate the line—thus providing redundancy if some of the line is obscured.
  • a single target on a hat can be simply detected and determined in its 3D location by the two or more camera stereo imaging and analysis system of the invention.
  • natural features of the user could alternatively, or in addition, be used, such as determining the user's head location from the gray level image detected by the TV camera of FIG. 1, say.
  • the target can be on the body, and the head can be found knowing the target location—to simplify identification of the head in an overall image of a complex room scene, say.
  • such coordinate data can also be used to control the screen display, to allow stored images to be directed in such a way as to best suit a use in a given part of a room, for example using directional 3D projection techniques. If user head angle as well is determined, then the viewpoint of the display can be further controlled therefrom.
  • Programs used with the invention can be downloaded from a variety of sources. For example:
  • a partner in an activity could not only exchange game software for example, but the requisite drivers to allow ones local game to be commanded by data from the partners activity over the communication link.
  • One of the interesting aspects of the invention is to obtain instructions for the computer controlling the game (or other activity being engaged in) using the input of the invention, from remote sources such as over the Internet. For example, let us say that General Motors wanted to sponsor the car game of the day played with a toy car that one might purchase at the local Toys-R-Us store and with its basic dashboard and steering wheel brake panel accelerator, gear lever, etc. All devices that can easily be targeted inputted via the video camera of the invention of FIG. 4.
  • the stereo photogrammetric activity at the point of actual determination can be used directly to feed data to the communications media. Orientation and position of objects, or of multiple points on objects or the like, can be transmitted with very little bandwidth, much less difficult than having to transmit the complete image. In fact, one can transmit the image using the same cameras and then use the computer at the other end to change the image in response to the data transferred, at least over some degree of change. This is particularly true if one transmits a prior set of images that corresponds to different positions. These images can be used at any time in the future to play the game by simply calling them up from the transmitted datums.
  • Similar to the playing function of FIGS. 5, 15, etc., there is also a teaching function, as was discussed relative to medical simulations in FIG. 8.
  • the invention is for example, also useful in the teaching of ballet, karate, dance and the like.
  • the positions and orientation of portions of the ballerina or her clothes can be determined using the invention, and compared to computer modeled activity of famous ballerinas, for example.
  • a motion of the student can be used to call TV images from a memory bank which were taken of famous ballerinas doing the same move, or of her instructor.
  • her instructor may be in another country. This allows at least reconstructed motion at the other end using a very small amount of transmitted data, much the same as we would reconstruct the motion of a player in the game.
  • the invention thus can use to advantage 3D motion done at very low cost in the home or in a small time ballet studio but nonetheless linked through CD ROM, the Internet or other media to the world's greatest teachers or performers. What holds true for ballet generally would also hold true for any of the sports, artistic or otherwise that are taught in such a manner. These can particularly include figure skating, golf or other sports that have to do with the moves of the person themselves.
  • In FIG. 5 we've illustrated the idea of two children playing an airplane game. In this case, they are playing with respect to themselves, but not necessarily directly; rather indirectly, by viewing the results of their actions on the screen, and it is on the screen that the actual event of their interaction takes place. In addition it should be noted that a single player can hold an airplane in each hand and stage the dogfight himself.
  • the teaching session can be stored locally or transmitted over a computer link such as the Internet.
  • Karate or dance for example can be taught over the Internet.
  • Targets if required, can be attached to arms, hands, legs, or other parts of the body.
  • the user's body part paths can be tracked in space in time by one or more camera systems.
  • the video can be analyzed in real-time or can be recorded and later analyzed.
  • the TV image data can ultimately even be converted to “Quant” data representing sequences of motion detected by the camera system for compact data transmission and storage.
  • the specific path data could be recognized as a specific karate thrust, say. This motion together with its beginning and end locations and orientation may be adequate for an automatic system.
  • a two-way Internet connection would allow the instructor's move to be compared with that of the student. By reducing the data to Quant data, the instructor's and student's size differences could be factored out.
  • the invention can be used to determine position and orientation of everyday objects for training and other purposes.
  • position and orientation of a knife and fork in one's hands can be detected and displayed or recorded, if target datums are visible to the camera system, either natural (e.g. a fork tip end) or artificial, such as a retro-reflective dot stuck on.
  • any tools such as wrenches, hammers, etc. indeed any apparatus that can be held in the hands (or otherwise).
  • the position of the apparatus held with respect to the hands or other portions of the body, or other bodies, may be determined as well.
  • Scalpels, drills, and the like may all be targeted or otherwise provided with natural features such as holes, slots, and edges which can work with the invention.
  • FIG. 16 illustrates an embodiment of the invention suitable for use on airplanes and other tight quarters.
  • a computer having an LCD screen 1610, which can be attached if desired to the back of the seat ahead 1605 (or to any other convenient member), has on either side of the screen, near the top, two video cameras 1615 and 1616 of the invention, which view the workspace on and above the tray table folding down from the seat ahead.
  • the user communicates with the computer using a microphone (for best reception a headset type, not shown, connected to the computer), which converts voice to letters and words using known voice recognition techniques. For movement of words, paragraphs, and portions of documents, including spreadsheet cells and the like, the user may use the invention.
  • Solid icons can be placed on the tray table and detected, in this case each having a small LED or LEDs and a battery. These can be moved on the table to connote meaning to the computer, such as the position of spreadsheet cells or work blocks in a PERT chart, and the like.
  • the screen could be larger than otherwise used for laptop computers, since it is all out of the way on the back of the seat (or at a regular desk, can stand up with folding legs for example). The whole computer can be built into the back of the device (and is thus not shown here for clarity).
  • a storage space for targeted objects used with the invention can be built into the screen/computer combination or carried in a carrying case. Attachments such as targets for attachment to fingers can also be carried.
  • FIG. 16 has illustrated an embodiment of the invention having a mouse and/or keyboard of the conventional variety combined with a target of the invention on the user to give an enhanced capability even to a conventional word processing or spreadsheet, or other program.
  • Voice recognition can clearly be used to replace the typing, and gesture sensing according to the invention including specialized gestures or movements such as shown in FIG. 5 can be used to improve recognition of voice inputs by the computer system.
  • touch screen indicator aspect to point directly at objects on the screen.
  • a user such as in FIG. 12 may be seated in front of a large high definition display screen on a wall, or tilted 45 degrees as at a writing desk.
  • the user can either touch (or near touch) the screen as in FIG. 12 or he can point at the screen with his finger targeted with retro-reflective scotch-lite glass bead target and the pointing direction calculated using the 3 target set on top of his wrist as in FIG. 1 b.
  • the screen's datums are known, for example four retro-reflective plastic reflector points at the corners 1270-1273 as shown.
  • projected targets on the screen can also be used to establish screen locations—even individually with respect to certain information blocks if desired.
  • a Stereo camera pair senses the positions of wrist and finger, and directs the computer and TV projector (not shown) to follow the wishes of the user at the point in question.
  • the user may use his other hand or head if suitably targeted or having suitable natural features, to indicate commands to the camera computer system as well.
  • the display can be in 3D using suitable LCD or other glasses to provide the stereo effect. This allows one to pull the values out of the Excel chart and make them extendable in another dimension. One can pull them out, so to speak, by using for example, as shown in FIG. 6, two targeted fingers (e.g. a targeted thumb and targeted finger) to grab or pinch and pull the object in the cell. In a word processor the word on the page can be so grabbed.
  • transparent targeted blocks may be moved over the top of transparent rear projection screen.
  • the blocks can also extend in height above the screen by a variable amount.
  • Data can be inputted by the computer screen, but also by varying the block height.
  • the height is then encoded into the screen projection to change the color or another parameter.
  • pointing direction can also be used.
  • This pointing method can also be used to point at anything—not just screens. It is especially useful with voice commands to tell the pointed item to do something. It is also of use to cue the projection system of the TV image to light up the pointed area or otherwise indicate where pointing is taking place.
  • the invention can operate in reverse from a normal presentation computer; that is, the person standing giving the presentation can point at the screen where the information is displayed, and what he pointed at, grasped, or whatever, is recorded by the cameras of the invention into the computer.
  • a laser pointer can be targeted and used for the purpose.
  • FIG. 17A This embodiment illustrates the versatility of the invention, for both computer input, and music.
  • a two camera stereo pair 1701 and 1702 connected to computer 1704 such as mentioned above for use in games, toys and the like can also be used to actually read key locations on keyboards, such as those of a piano or typewriter.
  • letters or in the piano case, musical note keys such as 1708 with retro target 1720 on their rear, beneath the keyboard, are observed with the camera set 1701 .
  • a Z axis movement gives the key hit (and how much, if desired—assuming elastic or other deformation in response to input function by player finger 1710 ), while the x (and y if a black key, whose target is displaced for example) location of the key tells which letter or note it is.
  • Speakers 1703 and 1705 provide the music from a MIDI computer digital to speaker audio translation.
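  • A hedged sketch of turning the observed key target positions into note events is given below; the key map, press threshold, and velocity scaling are illustrative assumptions:

```python
# Hedged sketch of reading the keyboard optically as described: each key's
# target has a known rest position; a z displacement past a threshold is a
# key hit, the x position identifies the note, and the result could be sent
# to a MIDI synthesizer.  Key map, threshold, and velocity scaling are
# illustrative.

KEY_MAP = {0: 60, 1: 62, 2: 64, 3: 65}   # x index -> MIDI note (C4, D4, E4, F4)

def detect_hits(targets, rest_z_mm, press_mm=3.0):
    """targets: {x_index: current_z_mm}.  Return (note, velocity) events."""
    events = []
    for x_index, z in targets.items():
        depth = rest_z_mm - z                 # how far the key has moved
        if depth >= press_mm:
            velocity = min(127, int(depth / 10.0 * 127))
            events.append((KEY_MAP[x_index], velocity))
    return events

# Two keys pressed in the same camera frame:
print(detect_hits({0: 42.0, 2: 45.0, 3: 50.0}, rest_z_mm=50.0))
# [(60, 101), (64, 63)]
```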
  • the two cameras are in this instance composed of 2048 element Reticon line arrays operating at 10,000 readings per second. Specialized DSP processors to determine the stereo match and coordinates may be required at these speeds, since many keys can be pressed at once.
  • the piano player's fingertips, as disclosed in previous embodiments, can be imaged from above the keyboard (preferably with retroreflective targets for highest speed and resolution) to create knowledge of his finger positions. This, when coupled with knowledge of the keyboard database, allows one to determine which key is being struck due to the z axis motion of the finger.
  • A dummy violin surrogate such as 1820 in FIG. 18 can be provided, which is played on bowstrings, real or dummy, with a bow 1825, also real or a dummy.
  • the position of the bow, vis a vis the dummy violin body 1820 proper, and the position of the fingers 1840 (which may be targeted) gives the answer as to what music to synthesize from the computer.
  • the easiest way to operate is to use retro-reflecting datums such as dot or line targets on all of the bow, violin, and fingers, such as 1830 , 1831 , 1832 , and 1833 , viewed with stereo camera system 1850 connected to computer 1858 and one or more loudspeakers 1875 .
  • Frequency response is generally enough at the 30 frames per second typical of standard television cameras to register the information desired, and interpolation can be used if necessary between registered positions (of, say, the bow). This may not be enough to provide the full timbre of the instrument, however.
  • the input from the targeted human or musical instrument part may cause, via the computer, the output to be more than a note, for example a synthesized sequence of notes or chords; in this manner one would play the instrument only in a simulated sense, with the computer synthesized music filling in the blanks, so to speak.
  • a display such as 1860 may be provided of the player playing the simulated instrument; it may use the data of the positions of his hands in a few positions and interpolate between them, or call from memory more elaborate moves, either taught or from a library of moves, so that the display looks realistic for the music played (which may also be synthesized), as noted above.
  • the display fill in is especially easy if a computer model of the player is used, which can be varied with the position data determined with the invention.
  • FIG. 19 illustrates a method for entering data into a CAD system used to sculpt a car body surface, in which a physical toy car surrogate for a real car model, 1910, representing for example the car to be designed or sculpted, is held in a designer's left hand 1902, and sculpting tool 1905 in his right hand 1906. Both car and tool are sensed in up to 6 degrees of freedom each by the stereo camera system of the invention, represented by 1912 and 1913 (connected to a computer, not shown, used to process the camera data, enter data into the design program, and drive the display 1915).
  • the objects are equipped with special target datums in this example, such as 1920-1922 on car 1910, and 1925-1927 on sculpting tool 1905.
  • a display of a car to be designed on the screen is modified by the action of the computer program responding to positions detected by the camera system of the sculpting tool 1905 with respect to the toy car, as the tool is rubbed over the surface of the toy car surrogate.
  • each tool has a code such as 1960 and 1961 that also indicates what tool it is, and allows the computer to call up from memory, the material modification effected by the tool.
  • This code can be in addition to the target datums, or one or more of the datums can include the code.
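  • As an illustrative sketch, the tool code can index a small table of tool properties while the stereo-derived poses place the tool tip in the surrogate car's own frame; the table contents, poses and cut radii below are assumed example values.

      # Sketch: identify a sculpting tool from its code and express its tip position
      # in the surrogate car's coordinate frame. All numbers are assumed examples.
      import numpy as np

      TOOL_TABLE = {
          1960: {"name": "coarse rasp", "cut_radius_mm": 8.0},
          1961: {"name": "fine scraper", "cut_radius_mm": 2.0},
      }

      def tip_in_car_frame(tip_world, car_rotation, car_origin):
          """Transform a world-frame tool tip into the car's local frame."""
          return car_rotation.T @ (tip_world - car_origin)

      tool = TOOL_TABLE[1961]                  # tool code read from the target pattern
      car_R = np.eye(3)                        # assumed: car currently unrotated
      car_t = np.array([0.30, 0.10, 0.40])     # car origin in world coordinates (m)
      tip = np.array([0.35, 0.12, 0.41])       # tool tip from the stereo solution

      local = tip_in_car_frame(tip, car_R, car_t)
      print(tool["name"], "tip in car frame:", local, "cut radius", tool["cut_radius_mm"], "mm")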
  • FIG. 20 illustrates an embodiment of the invention used for patient monitoring in the home, or hospital.
  • a group of retro-reflective targets such as 2021, 2030, and 2040 are placed on the body of the person 2045 and are located in space relative to the camera system (and if desired relative to the bed 2035, which also may include target 2036 to aid its location), and dynamically monitored and tracked by stereo camera system 2020 composed of a pair of VLSI Vision 1000×1000 CMOS detector arrays and suitable lenses.
  • target 2021 on chest cavity 2022 indicates whether the patient is breathing, as it goes up and down. This can be seen by comparison of target location in sequential images, or even just target blur (in the direction of chest expansion) if the camera is set to integrate over a few seconds of patient activity.
  • Target 2030 on the arm is monitored to indicate whether the patient is outside a desired perimeter, such as the bed 2035. If so, computer 2080 is programmed to sound an alarm 2015 or provide another function, for example alerting a remote caregiver who can come in to assist. A microphone such as 2016 may also be interfaced to the computer to provide a listening function, and to signal when help is needed.
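  • A minimal sketch of the two checks just described (breathing motion and bed perimeter), with assumed thresholds and coordinates, follows.

      # Sketch: flag absence of breathing motion and departure from an allowed bed
      # perimeter, from tracked target coordinates. Thresholds are assumed values.

      BREATH_MIN_EXCURSION = 5.0    # mm of chest-target travel expected over the window
      BED_X_RANGE = (0.0, 900.0)    # allowed arm-target range in mm (assumed bed extent)
      BED_Y_RANGE = (0.0, 2000.0)

      def breathing_detected(chest_z_samples):
          """True if the chest target moved enough over the observation window."""
          return (max(chest_z_samples) - min(chest_z_samples)) >= BREATH_MIN_EXCURSION

      def outside_perimeter(arm_xy):
          x, y = arm_xy
          return not (BED_X_RANGE[0] <= x <= BED_X_RANGE[1] and
                      BED_Y_RANGE[0] <= y <= BED_Y_RANGE[1])

      chest_samples = [102.0, 105.5, 109.0, 104.0]   # z of target 2021 over a few seconds
      arm_position = (950.0, 400.0)                  # target 2030, in mm

      if not breathing_detected(chest_samples):
          print("ALARM: no breathing motion detected")
      if outside_perimeter(arm_position):
          print("ALARM: patient outside allowed perimeter")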
  • a simple embodiment of the invention may be used to monitor and amuse toddlers and preschool age children.
  • a Compaq 166 MHz Pentium computer 8 with Compaq 2D color TV camera 10 was used, together with an Intel frame grabber and processor card to grab and store the images for processing in the Pentium computer.
  • the toddler is seated in a high chair or walking around at a distance for example of several feet from the camera mounted on top of the TV monitor.
  • an object such as a doll image or a modeled computer graphics image of a clown, let us say, could move up and down or side to side on the screen. (In the simple version of FIG. 1, only x and y motions of the toddler body parts or doll features are obtainable.)
  • the image of the clown can also be taken or imported from other sources, for example a picture of the child's father.
  • single or dual camera stereo of the invention can be used to increase the complexity with which the child can interact to 3, 4, 5, or 6 degrees of freedom with increasing sophistication in the game or learning experience.
  • His movements indicate as well what he is doing and can be used as another monitoring means. For example, if he is running or moving at too great a velocity, the computer can determine this by the rate of change of position of coordinates, or by observing certain sequences of motion indicative of the motion to be monitored. Similarly, and like the patient example above, if the coordinates monitored exceed a preset allowable area (e.g. a play space), a signal can be indicated by the computer.
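  • As an illustrative sketch with assumed limits, the velocity and play-space checks just described might look like the following.

      # Sketch: estimate a tracked child's speed from successive frame coordinates and
      # check it, and the position, against preset limits (assumed example values).
      import math

      FRAME_DT = 1 / 30.0                        # seconds between camera frames
      MAX_SPEED = 2.0                            # m/s, assumed limit before signalling
      PLAY_SPACE = ((-1.5, 1.5), (-1.5, 1.5))    # allowed (x, y) range in metres

      def check_motion(prev_xy, curr_xy):
          dx = curr_xy[0] - prev_xy[0]
          dy = curr_xy[1] - prev_xy[1]
          speed = math.hypot(dx, dy) / FRAME_DT
          in_area = (PLAY_SPACE[0][0] <= curr_xy[0] <= PLAY_SPACE[0][1] and
                     PLAY_SPACE[1][0] <= curr_xy[1] <= PLAY_SPACE[1][1])
          return speed, speed > MAX_SPEED, not in_area

      speed, too_fast, out_of_area = check_motion((0.20, 0.00), (0.32, 0.05))
      print(f"speed={speed:.2f} m/s, too fast: {too_fast}, out of play space: {out_of_area}")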
  • the device is also useful for amusement and learning purposes.
  • the toddler's wrists or other features can be targeted, and when he claps, a clapping sound can be generated by the computer in proportion to the motion, or with different characteristics, or the like.
  • the computer can be programmed, using known algorithms and hardware, to talk to him, tell him to do things, and monitor what he did, making a game out of it if desired. It can also aid learning, giving him visual feedback and audio and verbal appreciation of a good answer, score, and the like.
  • the invention can be used to aid learning and mental development in very young children and infants by relating gestures of hands and other bodily portions or objects such as rattles held by the child, to music and/or visual experiences.
  • FIG. 21 wherein an LCD TV display 2101 is attached to the end of crib 2102, in which baby 2105 is lying, placed so the baby can see it.
  • This display could be used to display for example a picture of the child's parents or pets in the home, or other desired imagery which can respond both visually and audibly to inputs from the baby sensed with the apparatus of FIG. 1, or other apparatus of the invention. These are then used to help illustrate the learning functions.
  • the camera system, such as stereo pair 2110 and 2115, is located as shown on the edges of the LCD screen or elsewhere as desired, and both cameras are operated by the computer 2135. Note that the design with the cameras integrated can be that of the laptop application of FIG. 22 as well.
  • the baby's hands, fingers, head, feet or any other desired portion can be targeted, on his clothes or directly attached. Or natural features can be used if only simple actions such as moving a hand or head are needed (all possible today with low cost computer equipment suitable for the home). And importantly, the baby can easily hold a targeted rattle such as 2130 having target datums 2152 and 2153 at the ends (whose sound may be generated from the computer speaker 2140 instead, and be programmably changed from time to time, or react to his input), and he may easily touch, as today, a targeted mobile in the crib as well, or any other object such as a stuffed animal, block or whatever.
  • the invention has allowed the baby to interact with the computer for the first time in a meaningful way that will improve his learning ability, and IQ in future years. It is felt by the inventors that this is a major advance.
  • the child can also move his hands or head and similar things can take place. For example, he can create music, or react to classical music (a known learning improvement medium today) perhaps by keeping time, or to cue various visual cues such as artistic scenes or family and home scenes that he can relate to certain musical scores and the like.
  • the child can also use the computer to create art, by moving his hand, or the rattle or other object, and with some simple program, may be able to call up stored images as well.
  • Another embodiment could have the child responding to stored images or sounds, for example from a DVD disc read by the computer 2135, and in effect voting on the ones he liked, by responding with movement over a certain threshold level, say a wiggle of his rattle. These images could later be played back in more detail if desired. And his inputs could be monitored and used by professionals for diagnosis, to determine further programs to help the child, or to diagnose whether certain normal patterns were missing—thus perhaps identifying problems in children at a very early age to allow treatment to begin sooner, or before it was too late.
  • data directly taken from the child can be transmitted to a central learning center for assistance, diagnosis, or directly for interactivity of any desired type.
  • an added benefit of the invention is that it can be used to aid mute and deaf persons who must speak with their hands.
  • the interpretation of sign language can be done by analyzing dynamic hand and finger position and converting, via a learning sequence or otherwise, into computer verbiage or speech.
  • the invention aids therapy in general, by relating motion of a portion of the body to a desired stimulus (visual, auditory, or physical touch). Indeed the same holds for exercise regimens of healthy persons.
  • stroke victims and other patients may need the action of the computer imagery and audio in order to trigger responses in their activity to retrain them—much like the child example above.
  • One of the advantages of this invention is that all sorts of objects can be registered in their function on the same camera system, operating in single, dual or other stereo capabilities, and all at low cost. The fact that the people, the objects, and the whole stationary platform—desk, floors, walls, and all—can be registered with the same generic principles is a huge benefit of the invention.
  • This invention allows one to directly sense these positions and movements at low cost. What this may allow one to do then is utilize the knowledge of such gestures to act as an aid to speech recognition. This is particularly useful since many idiomatic forms of speech are not able to be easily recognized but the gestures around them may yield clues to their vocal solution.
  • For example, it is comprehended by the invention to encode the movements of a gesture and compare them with either a well-known library of hand and other gestures taken from the populace as a whole, or with gestures taught using the gestures of the person in question.
  • the person would make the gesture in front of the camera, the movements and/or positions would be recorded, and he would record in memory, using voice or keyboard or both, what the gesture meant—which could be used in future gesture recognition, or voice recognition aided by the detected gesture.
  • a look-up table can be provided in the computer software, where one can look up a matrix of gestures, including the confidence level therein and the meaning, and then use that to supplement any spoken-word meaning that needs to be resolved.
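  • A minimal sketch of such a gesture look-up table, and of using its confidence value to bias between competing speech hypotheses, follows; all entries, scores and the weighting rule are assumptions for illustration.

      # Sketch: a gesture table holding a meaning and a confidence, used to bias
      # between competing speech-recognition hypotheses. All values are assumed.

      GESTURE_TABLE = {
          "point_forward": {"meaning": "that one", "confidence": 0.90},
          "palm_up_shrug": {"meaning": "unsure", "confidence": 0.70},
          "wave_off":      {"meaning": "no", "confidence": 0.85},
      }

      def resolve(speech_hypotheses, observed_gesture):
          """Pick the speech hypothesis best supported by the observed gesture."""
          gesture = GESTURE_TABLE.get(observed_gesture)
          best = None
          for text, speech_score in speech_hypotheses:
              bonus = 0.0
              if gesture and gesture["meaning"] in text.lower():
                  bonus = gesture["confidence"] * 0.5       # assumed weighting
              score = speech_score + bonus
              if best is None or score > best[1]:
                  best = (text, score)
          return best

      hypotheses = [("no thanks", 0.55), ("oh thanks", 0.60)]
      print(resolve(hypotheses, "wave_off"))    # the wave-off gesture favours "no thanks"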
  • One of the advantages of the invention is that there is a vast number of artifacts that can be used to aid the invention to reliably and rapidly acquire and determine the coordinates of the object datums at little or no additional cost relative to the camera/computer system.
  • retro-reflective targets on fingers, belt buckles, and many forms of jewelry, clothing and accessories (eg buttons) and the like.
  • Many of these are decorative, and objects such as these can easily be designed and constructed so that the target points represented are easily visible to a TV camera, while at the same time being interpreted by a human as a normal part of the object and therefore unobtrusive.
  • Some targets indeed can be invisible and viewed with lighting that is specially provided such as ultraviolet or infrared.
  • An object via the medium of software plus display screen and/or sound may also take on a life as a surrogate for something else.
  • a simple toy car can be held in the hand to represent a car being designed on the screen.
  • the toy car could have been a rectangular block of wood. Either would feel more or less like the car on the screen would have felt, had it been the same size at least, but neither is the object being designed in the computer and displayed on the screen.
  • the invention can sense dynamically, and the computer connected to the sensor can act on the data intelligently.
  • the sensing of datums on objects, targeted or not, can be done in a manner that optimizes function of the system.
  • the word target or datum essentially means a feature on the object or person for the purpose of the invention.
  • these can be either natural features of the object, such as fingernails or fingertips, hands and so on, or, as is often preferable, specialized datums put on especially to assist the function of the invention.
  • These typically include contrasting datums, due to high-brightness retro-reflection or color variation with respect to the surroundings, and often further or alternatively distinguished by some sort of pattern or shape.
  • Examples of patterns can include the patterns on cloth such as stripes, checks, and so on.
  • the pointing direction of a person's arm or sleeve having a striped cloth pointing along the length of the sleeve would be indicated by determining the 3D pointing direction of the stripes. This can easily be done using the edge detection algorithms with the binocular stereo cameras here disclosed.
  • a useful shape can be a square, a triangle, or something not typically seen in the room, desktop, or other area in which one would normally operate, such that it stands out. Or even if a common shape, the combination of the shape with a specific color or brightness, or both, often allows recognition.
  • Another point to stress concerning the invention is the fact of the performance of multiple functions. This allows it to be shared amongst a large number of different users and different uses for the same user, with a commonality as mentioned above of the teaching of its function, the familiarity with its use, and so forth.
  • a key is the natural aspect of the invention, that it enables, at low cost and high reliability the use of learned natural movements of persons—for work, for play, for therapy, for exercise—and a variety of other work and safety uses here disclosed, and similar to those disclosed.
  • FIGS. 1 to 3 have illustrated several basic principles of optically aided computer inputs using single or dual/multicamera (stereo) photogrammetry. Illustrated are new forms of inputs to effect both the design and assembly of objects.
  • Computer instructions can come from all conventional sources, such as keyboards, mice and voice recognition systems, but also from gestures and movement sequences, for example using the TV camera sensing aspect of the invention.
  • a targeted paint brush can instantly provide a real-feeling way to use painting type programs. While painting itself is a 2D activity on the paper, the 3D sensing aspect of the invention is used to determine when the brush is applied to the paper, or lifted off, and in the case of pressing the brush down to spread the brush, the z axis movement into the plane of the paper determines how much spreading takes place (paper plane defined as xy).
  • the 3D aspect is also used to allow the coordinate system to be transformed between the xyz as so defined, and the angulation of the easel with respect to the camera system wherever it is placed, typically overhead, in front, or to the side somewhere.
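  • As a brief sketch under an assumed easel pose and scaling, the plane transformation and pressure-to-width mapping described above might be written as follows.

      # Sketch: transform a stereo-measured brush tip into the easel's xy plane and
      # derive a stroke width from penetration along the easel normal. The easel
      # pose and width scale factor are assumed example values.
      import numpy as np

      def brush_in_easel_frame(tip_world, easel_origin, easel_R):
          """Return (u, v) on the paper and depth d along the easel normal."""
          local = easel_R.T @ (np.asarray(tip_world) - np.asarray(easel_origin))
          return local[0], local[1], -local[2]      # +depth means pressed into the paper

      def stroke_width_mm(depth_mm, width_per_mm=3.0, max_width=15.0):
          """Brush spreads with pressure: width grows with depth (assumed scale)."""
          return 0.0 if depth_mm <= 0 else min(max_width, depth_mm * width_per_mm)

      easel_R = np.eye(3)                        # assumed: easel plane parallel to camera xy
      easel_origin = np.array([0.0, 0.0, 0.50])  # paper plane 0.5 m from the cameras
      u, v, d = brush_in_easel_frame([0.12, 0.20, 0.498], easel_origin, easel_R)
      print(f"u={u:.3f} m, v={v:.3f} m, width={stroke_width_mm(d * 1000):.1f} mm")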
  • This freedom of placement is a major advantage of the invention, as is the freedom of choice of where targets are located on objects, thanks in particular to the two-camera stereo system's ability to solve all necessary photogrammetric equations.
  • The angle of the brush or a pen held in the hand with respect to the z axis can also be used to instruct the computer, as can any motion pattern of the brush, either on the paper or waved in the air.
  • the computer can be so instructed as to parametric shape parameters such as percent of circle and square.
  • the height in z may be used to control an object width for example.
  • the objects can be anything on which 3M Scotchlight 7615 type retro-reflective material can be placed, or other reflective or high contrast material incorporated into the surface of an object. You can stick them on fingers, toys or whatever, and they can be easily removed if desired. With two (or more) camera stereo systems, no particular way of putting them on is needed; one can solve photogrammetrically for any non-colinear set of three to determine object position and orientation, and any one target can be found in x, y and z.
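  • A minimal sketch of the photogrammetric solve mentioned above, recovering object position and orientation from any three non-colinear targets by a standard least-squares rigid fit, follows; the model and measured coordinates are assumed example values.

      # Sketch: rigid pose (rotation R, translation t) from three non-colinear targets
      # whose object-frame coordinates are known and whose world coordinates come from
      # the stereo solution (standard Kabsch fit). Coordinates are assumed examples.
      import numpy as np

      def rigid_pose(model_pts, world_pts):
          """Return R, t such that world ~= R @ model + t."""
          P = np.asarray(model_pts, float)
          Q = np.asarray(world_pts, float)
          pc, qc = P.mean(axis=0), Q.mean(axis=0)
          H = (P - pc).T @ (Q - qc)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
          R = Vt.T @ D @ U.T
          t = qc - R @ pc
          return R, t

      model = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.05, 0.0)]          # on the object
      world = [(0.50, 0.20, 0.80), (0.50, 0.30, 0.80), (0.45, 0.20, 0.80)]  # as measured
      R, t = rigid_pose(model, world)
      print("rotation:\n", np.round(R, 3), "\ntranslation:", np.round(t, 3))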
  • The physical nature of the alias object is a very important aspect of the invention. It feels like a real object; even though it is a simple targeted block, one feels that it is a car when viewing the car representation on the screen that the block position commands. Feel the object, look at the screen: this is totally different from controlling an object on a screen with a mouse.
  • a child can affix special targets (using velcro, tape, pins, or other means) on his favorite stuffed toys and then he can have them play with each other, or even a third. Or two children can play, each with their own doll or stuffed animal. But on screen, they convert the play into any kind of animal, including scenery (e.g. a barnyard).
  • the animals can have voice added in some way, either by the computer, or by prerecorded sounds, or in real time via microphones. Via the internet, new voice inputs or other game inputs can be downloaded at will from assisting sites. And programs, voice, and TV imagery can be exchanged between users.
  • Computer imagery of the actual animal can be taken using the same TV camera, recorded, and the 3D position determined during play, and the image transformed into a 3D image, rotated or whatever.
  • Each person can use a real, or alias object (eg a broomstick piece for a hammer) targeted as he chooses, in order to use the audio and visual capabilities of computer generated activity of the invention. All are more natural to him or her, than a mouse! In each case too, the object to be worked on can also be sensed with the invention
  • the computer program using the sensor input, can faithfully utilize the input, or it can extrapolate from it. For example rather than play middle C, it can play a whole chord, or knowing the intended piece, play several of the notes in that piece that follow. Similarly, one can start a simulated incision with a scalpel, and actually continue it a distance along the same path the student doctor started.
  • the cocking of a hammer on a toy pistol can act as a cue in many cases.
  • a microphone connected to the computer can pick this up and analyze the signature and determine that a gun may be fired. This can cause the vision analysis program looking at the tv image to look for the pistol, and to anticipate the shot.
  • the sound of the gun rather than a visual indicator, can alternatively be used to cue the displayed image data as well.
  • Two microphones, if used, can be used to triangulate on the sound source, and even tell the TV camera where to look. In many cases sound and physical action are related. Sounds for example can be used to pick up a filing noise, to indicate that an alias object was actually being worked by a tool.
  • the TV camera(s) can monitor the position and orientation of each, but the actual contact registered by sound. Or contact could be just the physical proximity of one image to another—however the sound is created by the actual physical contact which is more accurate, and more real to the user.
  • the invention can look for many signatures of object position and movement—including complex sequences. This has been described in another context relative to FIG. 7 for recognizing human gestures.
  • the recognition algorithm can be taught beforehand using the position or movement in question as an input, or it may be preprogrammed to recognize data presented to it from a library, often specific to the game/activity of interest.
  • Such recognition can also be used to anticipate an action. For example, if a bowstring or hand is moved directly back from a bow, the recognition is that one is drawing the bow, and that an arrow may be ready to be shot. The computer can then command the screen display or sound generation speakers to react (eyes or head move, person on screen runs away, etc.). Similarly, the actual action of releasing the bow can be sensed, and the program can react to the move.
  • a target point on a mouse could be movable. That is, the target could be wiggled by the finger holding the mouse, to signal a move or other action to the computer. This would then allow inputs to the computer from the device without adding any electrical wires or anything.
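  • A minimal sketch of detecting such a wiggle from a short window of tracked target coordinates follows; the amplitude and reversal thresholds are assumed values.

      # Sketch: treat a rapid small oscillation ("wiggle") of a tracked target as a
      # click-like signal. Thresholds are assumed values for illustration.

      WIGGLE_MIN_AMPLITUDE = 2.0   # mm of peak-to-peak motion within the window
      WIGGLE_MIN_REVERSALS = 3     # direction changes required within the window

      def is_wiggle(x_samples):
          """Detect a wiggle from the last few frames of a target's x coordinate."""
          amplitude = max(x_samples) - min(x_samples)
          reversals = 0
          prev_step = 0.0
          for a, b in zip(x_samples, x_samples[1:]):
              step = b - a
              if step * prev_step < 0:          # direction changed since the last frame
                  reversals += 1
              if step != 0:
                  prev_step = step
          return amplitude >= WIGGLE_MIN_AMPLITUDE and reversals >= WIGGLE_MIN_REVERSALS

      print(is_wiggle([100.0, 103.0, 100.5, 103.5, 100.0, 102.5]))   # True: deliberate wiggle
      print(is_wiggle([100.0, 100.2, 100.1, 100.3, 100.2, 100.1]))   # False: ordinary jitter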
  • Transducers can also be used as single point inputs, for example of pressures or temperatures or anything that would make a target move, for example, in the latter case, the target being on the end of a bimetal strip which changes position with temperature.
  • Another application is to register the relative position of one object to another.
  • the mouse is basically an odometer. It can't really give any positional data relative to something but can only give the distance moved in two directions which is then converted from some home location onto the screen.
  • the invention however is absolute, as the camera is as well. It can provide data on any point relative to any other point or even to groups of points—on objects, humans, or both. Even using the simplest form of the invention, one can put a target on a human and track it or find its position in space.
  • X and Y only (FIG. 1 a )
  • the LED in its simplest form can be powered by something that itself is powered. This means an LED on top of the mouse for example. On the other hand, typically the LED would be on an object where you would not like a power cable and this would then mean battery operated.
  • the basic technical embodiment of the invention illustrated in FIG. 1 uses a single TV camera for viewing a group of 3 or more targets(or special targets able to give up to a 6 degree of freedom solution), or a set of at least two TV cameras for determining 3D location of a number of targets individually, and in combination to provide object orientation.
  • These cameras are today adapted to the computer by use of the USB port or, better still, FireWire (IEEE 1394).
  • the cameras may be employed to sense natural features of objects as targets, but today for cost and speed reasons, are best used with high contrast targets such as LED sources on the object, or more generally with retro-reflective targets. In the latter case lighting as with IR LED's is provided near the optical axis of each camera used.
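  • As a minimal sketch (not part of the original disclosure), the range calculation for an idealized parallel-axis two-camera pair can be written as below; the focal length, baseline and pixel measurements are assumed example values.

      # Sketch: classic stereo range from disparity, z = f * b / d, for a parallel-axis
      # camera pair. All numeric values are assumed examples.

      FOCAL_PX = 800.0        # focal length expressed in pixels (assumed)
      BASELINE_M = 0.15       # camera separation in metres (assumed)

      def triangulate(x_left_px, x_right_px, y_px):
          """Pixels measured from each image centre; returns (x, y, z) in metres."""
          disparity = x_left_px - x_right_px
          if disparity <= 0:
              raise ValueError("target must appear further left in the left image")
          z = FOCAL_PX * BASELINE_M / disparity
          return x_left_px * z / FOCAL_PX, y_px * z / FOCAL_PX, z

      print(triangulate(120.0, 80.0, 30.0))   # disparity 40 px -> range z = 3.0 m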
  • Laser pointers are also very useful for creating one or more high contrast indications, simultaneously, or in sequence on object surfaces that can be sensed by the stereo cameras (typically two or more).
  • an object can be digitized using the same camera system used for target related inputs. This is an important cost justification of total system capability.
  • The invention combined with voice input makes the user much more portable—for example, one can walk around the room and indicate to the computer both actions and words.
  • the target, if a plain piece of glass bead retroreflector, typically cannot be seen beyond angles of plus or minus 45 degrees between the normal of the reflector and the camera viewing axis (indeed some material drops out at 30 degrees).
  • targets pointing in different directions may be desirable.
  • rather than multiple targets each pointed in a different direction, say rotationally about the head-to-toe axis of a dancer, one can use in some cases multi-directional targets, typically large balls, beads and faceted objects such as diamonds.
  • the target 1650 could be attached to gyroscope 1655 that in turn is attached to a base 1660 by a ball joint 1665 or other free floating mechanical link.
  • the target could be initially tilted directly toward the cameras allowing the cameras to view the target more precisely.
  • the base plate is then attached to the object to be tracked.
  • the position of the attachment can be calculated once the target location and orientation are established. Since the gyroscope would hold the target orientation toward the cameras as the dancer turns, this method extends the range of motion allowed to the dancer or other users.
  • an object may be physically thrown, kicked, slung, shot, or otherwise directed at the image represented on the screen (say at an enemy or some object, or in the case of a baseball game, at a batter's strike zone for example), and the thrown object tracked in space by the stereo cameras of the invention and/or determined in its trajectory or other function by information relating to the impact on the screen (the latter described in a referenced co-pending application). Damage to the screen is minimized by using front projection onto a wall.
  • FIG. 22 illustrates the use of a PSD (position sensitive photodiode)based image sensor as an alternative to, or in conjunction with, a solid state TV camera.
  • Two versions are shown: a single point device, with retro-reflective illumination or with a battery powered LED source, and a multi-point device with LED sources. A combination of this sensor and a TV camera is also described, as is an alternative using fiber optic sources.
  • a device using such an imaging device and a retroreflective background is presented as an alternative to specialized high reflectance datums on the human for example.
  • the PSD detector can utilize modulated sources, and demodulated PSD outputs as is well known. Detectors of this type are made for example by Sitek in Sweden and Hamamatsu in Japan. Where individual LED targets on the object are used, they may also be individually modulated at different frequencies in order to be distinguished one from the other, and from the background, and/or they may be rippled in sequence. Similarly fiber optically remoted sources may do this as well.
  • the camera 2210 is composed of a lens 2215 and a PSD detector 2220 , which provides two voltage outputs proportional to the location of an image on its face.
  • a single bright point such as retroreflective target 2230 is illuminated with a co-axial, or near coaxial light source 2235
  • a spot 2240 is formed on the PSD face, whose xy location voltage signal 2244 is digitized and entered into the control computer 2250 by known excitation and A-D converter means.
  • an LED or other active source can be used in place of the retro and its light source. In either case the background light reaching the PSD is much less than that from the target and effectively ignored.
  • PSD systems are fast, and can run at speeds such as 10,000 readings per second, far beyond a TV camera's ability to see a point. This is very desirable where high speed is needed, or where high background noise rejection is required, such as in bright light (e.g. in a car on a sunny day).
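  • As an illustrative sketch, the conversion from the two electrode signals of a one-axis lateral-effect PSD to a spot position uses the standard difference-over-sum relation; the detector length and voltages below are assumed example values (a two-axis PSD adds a second electrode pair for y).

      # Sketch: one-axis lateral-effect PSD position from its two electrode signals.
      # Detector length and sample voltages are assumed examples.

      PSD_LENGTH_MM = 10.0     # active length of the detector axis (assumed)

      def psd_position_mm(v_a, v_b):
          """Spot position along the axis, measured from the detector centre."""
          total = v_a + v_b
          if total <= 0:
              raise ValueError("no light on the detector")
          return (PSD_LENGTH_MM / 2.0) * (v_b - v_a) / total

      # Electrode A reads 0.30 V and electrode B reads 0.50 V:
      print(f"spot at {psd_position_mm(0.30, 0.50):+.2f} mm from centre")   # +1.25 mm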
  • a TV camera and a PSD camera as above can be used in concert, where desired.
  • a PSD chip such as 2260 can be built into a TV camera, 2265 having a lens 2270 and a CCD array chip 2271 , using a beam splitter 2275 which allows in this case, both to view the same field of view.
  • This allows one, for example, to use the retroreflector illumination such as 2235 for the psd detected target, and the TV camera to obtain normal scene images, or to determine other target presence and location—for example those near the more rapidly and easily detected PSD sensed target (but knowing where it is, via its output signal related to the output scan of the TV camera).
  • the LED or other retroreflection-specific light source can light up the whole object, but other effects such as saturation do not concern the TV image as they can when strong retro signals result with TV cameras.
  • a feature of such a combination is that it allows the PSD sensor system, for example, to find one target, and use the TV camera to find the rest, made easier once the first one is identified, since the others can be specified a priori to be within a given search area or path from the first target.
  • an inverse type system can be made, where the background surface (eg on a desk top) appears bright, and the target is black. This can be done with retroreflector material or even white paper on a desk top for example.
  • the target object could be one's finger, which would cover up the retro, and the PSD gives a rough output as to its x and y position.
  • 8 parallel PSD detectors 2280, giving x outputs to an 8-channel common PC computer A-D data acquisition card 2282, can provide finger 2285 location in x and y (the latter only to a level of 1 part in 8), and the pointing angle of the finger (roll in the xy plane). This is much faster than a TV camera for this purpose. That is, the finger extended to detector 3, and the top end was at VLEFT while the bottom one on detector 2 was at VRIGHT.
  • Previous copending applications illustrate a fiber optic alternative in which light enters the fibers at one point, and is dispersed to a single fiber or a group traveling to the fiber end, which then acts as a target, and can be provided on an object (even during molding or casting thereof). This can be less obtrusive than individual LED's, for example.
  • A co-target is a target put on an object for the purpose of telling a computer-based camera obtaining its image where to look for other targets in the image. This can be useful, as can a special target which is placed on the object in such a way as to indicate the object's orientation and to identify the object itself if desired, just by looking at the target (which is known relative to the data base of the object). See also U.S. Pat. No. 5,767,525.
  • FIG. 23 illustrates inputs to instrumentation and control systems, for example those typically encountered in car dashboards to provide added functionality and to provide aids to drivers, including the handicapped
  • Illustrated is an embodiment providing input to automotive control systems such as usually associated with car dashboard instrumentation to provide added functionality and to provide aids to drivers, including the handicapped.
  • the car is real, as opposed to the toy illustration of FIG. 4 in which the dash is a toy, or even a make-believe dash, and the car is simulated in its actions via computer imagery and sounds.
  • driver 2301 holds gear shift lever 2302 , in the usual manner.
  • Target datums 2305-2308 are on his thumb and fingers (or alternatively on a ring, or other jewelry, for example) or his wrist, and are viewed by miniature TV camera stereo pair 2320 and 2321 in the dash near the area of the gear lever.
  • Light sources as appropriate are provided with the cameras, particularly of use are IR LED's 2323 and 2326 near each camera respectively.
  • Computer 2340 reads the output of each TV camera, and computes the position and relative position of the targets, either with respect to the camera pair, or to each other, or to gear lever 2302 (which itself may be targeted if desired, for example with target 2310), or to some other reference. Or the computer may simply look for motion of any object (e.g. a finger) or target on an object (e.g. a ring) above some base level of allowable motion, in the event that the user wishes to signal an action just by moving his finger (regardless of its position, or with the condition that it be within a certain window of positions, such as between 1 and 3 o'clock on the steering wheel). Movement can be detected by comparing successive frames, or by blurred images for example.
  • the driver may with this embodiment, signal a large number of different actions to the computer, just by moving his fingers while holding the gear lever, or as is even more relaxing, letting his hand rest on the gear lever, with fingers pointing down as shown which points datums on the tops of his fingers toward the dash or roof section above the windshield where cameras such as 2345 and 2346 can be located relatively easily(see also armrests in FIG. 10).
  • the steering wheel 2360 rather than or in addition to the gear lever could also be used as point of observation of the driver (these two locations are where drivers normally rest their hands, but other places such as near armrests etc. could be chosen too).
  • an advantageous alternate camera location is in the headliner, not shown, which allows viewing of the fingers or targets thereon from above.
  • the steering wheel is a natural place, where at the 10 and 2 o'clock positions 2361 and 2362 in normal driving, one can wiggle one's thumb, or make a pinching gesture with thumb and first finger, which could be programmed to actuate any function allowed by the car's control microcomputer 2350 connected to the TV camera processor 2340 (the two could be one and the same, and both likely located under the dash).
  • the program could be changed by the user if desired, such that a different motion or position gave a different control function.
  • Actions chosen using finger position, or relative position, or finger motion or path could be control of heating, lighting, radio, and accessories, or for handicapped and others could even be major functions, such as throttle, brake, etc.
  • FIG. 24 illustrates a control system for use with “do it yourself” target application
  • LED light sources can be used advantageously as targets with the invention—especially where very high contrast is needed, especially achievable with modulated LED sources, and demodulated PSD based detectors.
  • an advantage of reflective targets, and retro-reflective targets in particular, as opposed to LED targets is that you can easily put them on an object at very little cost, without requiring the object to have batteries, wires or the like.
  • objects not designed for the purpose, such as a young girl's favorite doll, can be easily equipped with small unobtrusive colored and/or retro-reflective targets (if suitable natural target features aren't available, as is often the case), and this favorite toy becomes the input device to a game of doll house or the like on the screen; with suitable software support the child can have her doll playing in the White House on the screen! And audio can suit as well, for example the first lady could talk back!
  • Retroreflective material such as scotchlight 7615 is naturally gray appearing and unless brightly colored for ease of further identification, is quite unobtrusive to the user. Indeed it can be colored the color of the portion of the object on which it is provided to make it even more so. (except of course along the path from the light source illuminating same—not seen by the average user except in rare situations).
  • the datums on an object can be known a priori relative to other points on the object, and to other datums, by selling the object designed using such knowledge (or measured after the fact to obtain it) and including with it a CD ROM disc or other computer-interfaceable storage medium having this data.
  • the user for example, can teach the computer system this information. This is particularly useful when the datums are applied by the user on arbitrary objects.
  • a more involved 3D digitized model can also be created with the invention, and the datums associated with it
  • a distracting portion (e.g. a belt buckle having glints)
  • standard frameworks for activity can be provided by a vendor on software discs or over the internet, which allow the user to easily construct his own activity.
  • the framework can include software for specialized datum detection included with the game kit for example.
  • the framework can have software to tailor game or other activity software to the taught in positions and movements of the game player (human, doll, or whatever).
  • a diagnostic and optimization program could look at a few examples of use during a warm-up period or even once a game, for example, got going, and then optimize various parameters to suit, such as:
  • Lighting related parameters such as LED power, LED pulse time if used, camera integration time, etc. also even varied to suit different portions of the game, and of course to suit the room, distances from the camera and so on.
  • a warning of slow response could be given if working parameters were not met, so the user could change a condition if he wished.
  • the standard program framework could assist the user in construction of the activity itself.
  • the airplane game of FIG. 5 could have a library of various display and aural options which the user could select to tailor his game as desired.
  • program elements could cross from one game type to another (eg the car dash of FIG. 4 if it were an airplane dash could use the airplane action display imagery employed in the game of FIG. 5).
  • some elements might cross over to non game activity as well
  • FIG. 24 A flow chart illustrating some of the above steps is shown in FIG. 24. Steps are as follows.
  • FIG. 25 illustrates a game experience with an object represented on a deformable screen.
  • a boxing dummy such as 2515 represented as an image on the screen, that one actually hits and deforms is possible using the invention if one considers the screen to be the deformable object. In this case perhaps it is not necessary to actually encode the deformation in the screen 2520 but assume a deformation since one knows where one hit it, by determining a target or other feature position such as 2525 on the hitting object such as boxing glove 2530 , observed by camera system 2535 whose images are processed by computer 2540 to obtain glove position.
  • Display processor 2545 uses this glove position data, to modify a computer modeled 3-D data base of an opponent stored in a data base 2550 , and drive display 2560 , for example providing said display on a large rear projection tv screen 2565 .
  • the actual actions can be modeled in a computer program capable of providing a 3D rendered display for near life like representation of the result of an action. This would apply to sword fights, soccer games, and other activity described in this and related applications. For example using a targeted sword, rather than a boxing glove, one can physically slash a real life-size opponent represented by an image on a screen and, since one knows where the slash occurs on the projection tv image by virtue of the target point determination of the sword tip using the camera system of the invention, blood representation can emerge from the screen image, or a simulated head falling off or whatever.
  • Throwing and firing sports such as baseball, shooting, archery, etc.
  • Football (American)
  • Football (soccer)
  • Hockey (field hockey, lacrosse, etc.) played with goalies in the goal.
  • Games are also possible such as throwing paper airplanes, where one can easily affix to ones plane, light weight scotch-lite retro-reflector targets so as to be able to track its motion using the cameras of the invention in 3 dimensions, using the computer system of the invention for the purpose of scoring the game, or to drive a screen display, or to create sounds, or what have you. Again, imagery from the FIG. 5 airplane game could be employed here as well if desired.
  • One aspect of the invention shown above illustrates a gaming situation with respect to a sword fight. This can be made totally realistic, but without a great deal of cost, using a high intensity projection TV, which is becoming ever cheaper as of this writing.
  • the screen may be either rigid, semi-deformable, deformable, or in fact ablated or permanently changed by the action of the game. All of these things are possible by using the targeted objects and implements such as described to pick up the point of contact as an accurate measure of the contact.
  • a simple way to determine the existence of motion, and to calculate motion vectors with low cost tv cameras is to use the blur of a distinct target during the integration time of the camera. For example, in the TV Camera image 2601 there is a distinct datum 2605 . This is indicative of a LED or retro disc source on an object, for example, with background ignored (by setting an illumination or color threshold for example).
  • some blurring of target datums can be useful for subpixel resolution enhancement.
  • This can be motion blur, or blur due to a somewhat out of focus condition (effectively making a small luminous target in a large field of view look like a bigger, but less intense, blob covering more pixels).
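  • A minimal sketch of locating such a slightly blurred spot to sub-pixel accuracy by an intensity-weighted centroid follows; the small image patch and threshold are assumed example values.

      # Sketch: sub-pixel spot location as the intensity-weighted centroid of pixels
      # above a threshold. The patch and threshold are assumed example values.
      import numpy as np

      def spot_centroid(patch, threshold):
          """Return (row, col) centroid of above-threshold intensity, in pixel units."""
          masked = np.where(patch > threshold, patch.astype(float), 0.0)
          total = masked.sum()
          if total == 0:
              return None
          rows, cols = np.indices(patch.shape)
          return (rows * masked).sum() / total, (cols * masked).sum() / total

      patch = np.array([[ 5,  8,  6,  5],
                        [ 7, 60, 90,  8],
                        [ 6, 55, 80,  7],
                        [ 5,  9,  7,  5]])
      print(spot_centroid(patch, threshold=20))   # roughly (1.47, 1.60): between pixels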
  • Such a purposeful defocus could even be done with a piezo electric actuation of the camera lens or array chip position, to allow in-focus conditions when not actuated.
  • this filter could purposely be optically shaped to slightly defocus the system when used for target as opposed to scene viewing.
  • the sword tip position versus the screen image can alternatively be calculated from a knowledge of the part data base of the sword and 3 points to determine its position and orientation in space, plus a knowledge of where the projected image on the screen lies. This may require calibration in the beginning, for example projecting, using the TV display, a computer-generated target point on the display screen, which can be viewed by the TV camera(s) of the invention and used to set reference marks in space.
  • Screen generated targets can also be used to calibrate the field of view of the camera to take out lens errors and the like, and to adjust relationships between two cameras of a stereo pair (or even more sets of cameras).
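  • As an illustrative sketch under assumed values, the screen-to-camera mapping obtained from a few screen-generated calibration targets can be modelled as a homography fitted by the standard direct linear transform; the point correspondences below are invented for the example.

      # Sketch: fit a 3x3 homography mapping screen coordinates to camera image
      # coordinates from projected calibration targets (DLT). Points are assumed.
      import numpy as np

      def fit_homography(screen_pts, image_pts):
          """Return H with image ~ H @ screen in homogeneous coordinates."""
          A = []
          for (x, y), (u, v) in zip(screen_pts, image_pts):
              A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
              A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
          _, _, Vt = np.linalg.svd(np.asarray(A, float))
          H = Vt[-1].reshape(3, 3)
          return H / H[2, 2]

      def screen_to_image(H, x, y):
          p = H @ np.array([x, y, 1.0])
          return p[0] / p[2], p[1] / p[2]

      screen = [(0, 0), (1024, 0), (1024, 768), (0, 768)]      # projected calibration targets
      camera = [(102, 88), (598, 95), (590, 470), (98, 462)]   # where the camera saw them
      H = fit_homography(screen, camera)
      print(np.round(screen_to_image(H, 512, 384), 1))         # screen centre in camera pixels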
  • targets useful in the invention may be designed of diffractive or holographic based material so as to provide, for example, directional and/or color based responses to light input. This can be used to recognize or identify targets, and for causing desirable light distribution on reflection which aid the detection process by a suitable camera
  • a wristwatch can contain high specific reflectivity retroreflective glass bead or corner cube material in its face or hand that can be sensed by the camera or cameras of the invention in order to easily find the wrist and hand in a field of view.
  • rings on the fingers containing such material can greatly aid the ability of the camera system to see the fingers and to get close enough such that relatively simple image processing can find the fingertips from the ring, or with more difficulty, from the wrist watch.
  • belt buckles, bracelets, pins, necktie clips and the like can all serve this purpose in a decorative and aesthetically pleasing manner.
  • ring 2801 having band 2802 and a “jewel” comprised of a corner cube retro-reflector 2803 , capable of very high contrast return signals to near on axis illumination.
  • the jewel could be a diamond (real or synthetic) cut to reflect light incident from many angles in somewhat similar manner.
  • ring 2815 having 5 corner cubes, 2826 - 2830 , each pointing in different directions, to allow operation from a variety of finger positions.
  • ring band 2840 comprised of a base ring, 2845 with retro-reflective bead tape material 2850 attached, and covered with a protective plastic overlay 2855 .
  • the overlay could be either totally transparent, or alternatively of band pass material, that would only allow reflection back of a specific wavelength band,(eg matching an LED illumination wavelength).
  • the user might chose to wear multiple rings each of a different color, which could be color identified. Or multiple users, each with a different color, say.
  • a special flat tape type retroreflector can be provided having a microprism grating or grille or a diffraction grating or grille on its face which directionally alters the incoming and outgoing radiation so as to be able to be seen from more angles than normal material such as Scotchlight 7615 of 3M company.
  • the retroreflection illumination light source is substantially coaxial with the optical axis of said TV camera when retroreflectors are used.
  • the LED is the preferred source to illuminate reflective targets.
  • If an LED is used, it has the advantage of low power requirement, being self-luminous and of a known wavelength. This means that the camera can be filtered for this wavelength quite easily, although, if it is, it won't see other wavelengths very well by definition.
  • LED light sources for target illumination are preferable because of their programmability (i.e. ease of turning on/off, or modulating at a given frequency or pulse duration), their low cost, and their low energy consumption. Operating in the infrared or other non-visible wavelengths, they do not bother the user.
  • FIG. 1 a has illustrated a simplified version of the invention using even one retro-reflective item such as a ring, a thimble with a target on it, a snap on finger target, a color or retroreflective painted nail or other feature on the person.
  • the camera used for this is either a special camera dedicated to the task or shared with a video-imaging camera.
  • the LED light source (which in one embodiment is comprised of a ring of LEDs such as 26 around the camera Lens 24 , pointing outward at the subjects to be viewed) is turned on, and in one case, a bandpass filter (passing the LED wavelength) such as 25 is placed over the lens of the camera that might be normally used simply for acquiring images for Internet telephony or what have you.
  • This filter can be screwed, slid on or snapped on or any other way that allows it to be easily removed when non-filtered viewing is desired.
  • the LED's, in this case in a ring arrangement surrounding the lens, are easily attached to the camera by suitable attachments, either permanent, quasi-permanent via highly sticky adhesive, or in some cases temporary, owing to the wide variety of cameras today.
  • the LED's energized in the particular embodiment here are near infrared, operating at a wavelength of 0.85 micron. They provide the illumination needed without being distracting to the user. Visible LED's are usable too if they don't distract the user. A filter on the front of the camera largely removes the effect of light outside of the wavelength of the illumination.
  • a question to answer is whether the camera system is required to be used both for image production of the object and for viewing certain types of special targets, or whether it can be devoted just to the special target purpose.
  • the lighting is easier because there is only one issue to contend with; seeing the light reflected from the special target, which typically has high brightness, and /or high contrast or color contrast to its surroundings.
  • This can be done at specialized wavelengths, particularly of interest in the very near infrared (e.g. 0.75 to 0.9 microns wavelength) where strong LED sources exist, which is visible to the cameras in general use, but which is not bothersome or obtrusive to the user.
  • a special band pass filter transmissive to the LED, laser or other sufficiently monochromatic light source wavelength can be used to cover the camera lens.
  • the filter is conveniently provided with a chain, or preferably a sliding function, to slide in front of the lens when this function is needed. This function can be automated with, for example, a solenoid at added cost, to provide quick switching. Electronically switchable filters can also be used where faster switching is required.
  • one camera too can be a master, used for conventional images, with the other a slave used only for determining object location. It is noted that if the stereo pair are spaced roughly like the eyes (eg 6-8 inches apart) and pointing straight ahead or nearly so, that the image created can be used to drive a stereo display—this could be of considerable interest at the other end of an internet connection for example, where the other person could view the person being imaged in 3D using “Crystal eyes” or other brands of LCD glasses and appropriate Video displays.
  • the invention can use special datum's such as round or point source LED's, retro-reflective, or other contrasting material comprising spots or beading defining lines or edges, or it can use natural object features, such as fingertips hands, head, feet, or eyes. Often a judicious combination of natural and object features can be chosen to minimize special features and their application, but to make use of their ease of discovery at high speed in a large field of view. For example, if one finds a high contrast, perhaps specially colored artificial feature, one can reduce the search window in the field of view often to that immediate area around the feature for example, where other related natural (or artificial) features are likely to lie.
  • LEDs include diode lasers (including diode pumped lasers), superluminous devices and others.
  • each camera may see a somewhat different target shape as well. And its brightness can be different, as pointed out above. It is desirable to optimally detect each target datum in each separate stereo image first, before attempting to match images to determine where the datums coincide, which gives the z axis range.
  • a datum image can be compared with a pre-stored criteria, or previously observed results and indications to the operator or automatic signaling of alternate datum programs be made if conditions warrant.
  • a given range of motions of an object or person is not in the range of motions that has been programmed.
  • a warning to slow down can be given, or suggestions made to speed up the system, such as increase light intensity, target brightness, etc.
  • a first motion check could be done, for example, by waving one's arms in a certain way that would cause the computer to register a particular user, or the motion capture algorithm to be used, or a speed parameter, or anything to do with the camera and its light gathering.
  • a first user should go through a simple training, or at least a setup routine, where they perform certain actions, movements and other things in the range that they expect to use, and let the camera system set itself up to that where possible.
  • Light as used herein, can be electromagnetic waves at x-ray through infra-red wavelengths.
  • a “target Volume” is the volume of space (usually a rectangular solid volume) visible to a video camera or a set of video cameras within which a target will be acquired and its position and/or orientation computed.
  • An “Interrupt member” is a device that provides a signal to the system's computer allowing a computer program to identify the beginning of one path of a target and the end of the preceding path. It can also identify a function, object, or parameter value. Examples of an Interrupt member are:
  • a voice recognition system capable of acting on a sound or spoken word.
  • a trigger, switch, dial, etc. that can turn on a light or mechanically make visible a new target or sub-target with unique properties of color, shape, and size.
  • a “Quant” is a unique discretized or quantized target path (defined by location, orientation, and time information) together with the target's unique identification number (ID).
  • a Quant has an associated ID (identification number).
  • a Quant is composed of a sequence of simple path segments.
  • An example of a Quant that could be used to define command in a CAD drawing system to create a rectangle might be a target sweep to the right punctuated with a short stationary pause followed by an up sweep and pause, a left sweep and pause, a down sweep and pause, and finally ended with a key press on the keyboard.
  • the Quant is stored as a set (4, 1, 2, 3, 4, a, 27) where 4 is the number of path segments, 1-4 are numbers that identify path segment directions (i.e. right, up, left, down), “a” is the member interrupt (the key press a), and 27 is the target ID. Note that the punctuation that identifies a new path direction could have been a radical change in path direction or target orientation or speed.
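  • A minimal sketch of how such a Quant could be represented and matched in software follows; the class layout and command library are assumptions for illustration, with only the example set (4, 1, 2, 3, 4, a, 27) taken from the text.

      # Sketch: a Quant stored as (segment count, segment codes..., interrupt, target ID)
      # and matched against a small command library (library contents assumed).
      from dataclasses import dataclass

      DIRECTIONS = {1: "right", 2: "up", 3: "left", 4: "down"}

      @dataclass(frozen=True)
      class Quant:
          segments: tuple          # path segment direction codes, in order
          interrupt: str           # interrupt member, e.g. the key press "a"
          target_id: int

          def as_set(self):
              return (len(self.segments), *self.segments, self.interrupt, self.target_id)

      # The rectangle-drawing command from the text: right, up, left, down, key "a".
      rectangle = Quant(segments=(1, 2, 3, 4), interrupt="a", target_id=27)
      print(rectangle.as_set())                              # (4, 1, 2, 3, 4, 'a', 27)

      COMMANDS = {rectangle.as_set(): "create rectangle"}    # assumed command library
      observed = Quant(segments=(1, 2, 3, 4), interrupt="a", target_id=27)
      print(COMMANDS.get(observed.as_set(), "unrecognised gesture"))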
  • Light as used herein includes all electro-magnetic wavelengths from ultraviolet to near infrared

Abstract

The invention is aimed at providing affordable methods and apparatus for inputting position, attitude(orientation) or other object characteristic data to computers for the purpose of Computer Aided learning, Teaching, Gaming, Toys, Simulations, Aids to the disabled, Word Processing and other applications.
Preferred embodiments of the invention utilize electro-optical sensors, and particularly TV Cameras, providing optically inputted data from specialized datums on objects and/or natural features of objects. Objects can be both static and in motion, from which individual datum positions and movements can be derived, also with respect to other objects both fixed and moving. Real-time photogrammetry is preferably used to determine relationships of portions of one or more datums with respect to a plurality of cameras or a single camera processed by a conventional PC.

Description

    CROSS REFERENCES TO RELATED JOINT APPLICATIONS INCORPORATED BY REFERENCE
  • Provisional applications by Tim Pryor and Peter Smith [0001]
  • New man/machine interfaces and applications, filed Aug. 22, 1997 and [0002]
  • Novel Man machine interfaces and applications, filed Sep. 19, 1997 (docket number IV/PO5332USO) [0003]
  • Tim Pryor applications incorporated by reference herein [0004]
  • Man Machine Interfaces, filed Sep. 18, 1992 (Ser. No. 08/290,516) [0005]
  • Touch TV and other Man Machine Interfaces, filed 1995 (Ser. No. 08/496,908) [0006]
  • Systems for Occupant Position Sensing, Ser. No. 08/968,114 [0007]
  • Vision Target based assembly, U.S. Ser. Nos. 08/469,429, 08/469,907, 08/470,325, 08/466,294 [0008]
  • Federally sponsored R and D statement—not applicable [0009]
  • Microfiche Appendix—not applicable [0010]
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0011]
  • The invention relates to simple input devices for computers, well suited for use with 3-D graphically intensive activities, and operating by optically sensing object or human positions and/or orientations. The invention in many preferred embodiments, uses real time stereo photogrammetry using single or multiple TV cameras whose output is analyzed and used as input to a personal computer. [0012]
  • 2. Description of Related Art [0013]
  • The closest known references to the stereo photogrammetric imaging of datum's employed by several preferred embodiments of the invention are thought to exist in the fields of flight simulation, robotics, animation and biomechanical studies. Some early prior art references in these fields are [0014]
  • U.S. patents [0015]
  • Pugh U.S. Pat. No. [0016]
  • Birk U.S. Pat. No. 4,416,924 [0017]
  • Pinckney U.S. Pat. No. 4,219,847 [0018]
  • U.S. Pat. No. 4,672,564 by Egli et al, filed Nov. 15, 1984 [0019]
  • Pryor U.S. Pat. No. 5,506,682, robot vision using targets [0020]
  • Pryor, Method for Automatically Handling, Assembling & Working on Objects U.S. Pat. No. 4,654,949 [0021]
  • Pryor, U.S. Pat. No. 5,148,591, Vision target based assembly [0022]
  • In what is called “virtual reality”, a number of other devices have appeared for human instruction to a computer. Examples are head trackers, magnetic pickups on the human and the like, which have their counterpart in the invention herein. [0023]
  • References from this field having similar goals to some aspects of the invention herein are: [0024]
  • U.S. Pat. No. 5,297,061 by Dementhon et al [0025]
  • U.S. Pat. No. 5,388,059 also by Dementhon, et al [0026]
  • U.S. Pat. No. 5,168,531: Real-time recognition of pointing information from video, by Sigel [0027]
  • U.S. Pat. No. 5,617,312 Computer system that enters control information by means of video camera by Iura et al, filed Nov. 18, 1994 [0028]
  • U.S. Pat. No. 5,616,078: Motion-controlled video entertainment system, by Oh; Ketsu, [0029]
  • U.S. Pat. No. 5,594,469: Hand gesture machine control system, by Feeman, et al. [0030]
  • U.S. Pat. No. 5,454,043: Dynamic and static hand gesture recognition through low-level image analysis by Freeman; [0031]
  • U.S. Pat. No. 5,581,276: 3D human interface apparatus using motion recognition based on dynamic image processing, by Cipolla et al. [0032]
  • U.S. Pat. No. 4,843,568: Real time perception of and response to the actions of an unencumbered participant/user by Krueger, et al [0033]
  • Iura and Sigel disclose means for using a video camera to look at an operator's body or finger and input control information to a computer. Their disclosure is generally limited to two dimensional inputs in an xy plane, such as would be traveled by a mouse used conventionally. [0034]
  • Dementhon discloses the use of objects equipped with 4 LEDs detected with a single video camera to provide a 6 degree of freedom solution of object position and orientation. He downplays the use of retroreflector targets for this task. [0035]
  • Cipolla et al discusses processing and recognition of movement sequence gesture inputs detected with a single video camera, whereby objects or parts of humans equipped with four reflective targets or LEDs are moved through space, and a sequence of images of the objects is taken and processed. The targets can be colored to aid discrimination. [0036]
  • Pryor, one of the inventors, in several previous applications has described single and dual (stereo) camera systems utilizing natural features of objects or special targets including retroreflectors for determination of position and orientation of objects in real time suitable for computer input, in up to 6 degrees of freedom. [0037]
  • Pinckney has described a single camera method for using and detecting 4 reflective targets to determine position and orientation of an object in 6 degrees of freedom. A paper by Dr. H. F. L. Pinckney entitled Theory and Development of an on-line 30 Hz video photogrammetry system for real-time 3 dimensional control, presented at the Symposium of Commission V, Photogrammetry for Industry, Stockholm, August 1978, together with many of the references referred to therein, gives many of the underlying equations of solution of photogrammetry, particularly with a single camera. Another reference, relating to use of two or more cameras, is Development of Stereo Vision for Industrial Inspection, Dr. S. F. El-Hakim, Proceedings of the Instrument Society of America (ISA) Symposium, Calgary, Alberta, Apr. 3-5, 1989. This paper too has several useful references to the photogrammetry art. [0038]
  • Generally speaking, while several prior art references have provided pieces of the puzzle, none has disclosed a workable system capable of widespread use, the variety and scope of embodiments herein, nor the breadth and novelty of applications made possible with electro-optical determination of object position and/or orientation. [0039]
  • In this invention, many embodiments may operate with natural features, colored targets, self-illuminated targets such as LEDs, or with retroreflective targets. Generally the latter two give the best results from the point of view of speed and reliability of detection—of major importance to widespread dissemination of the technology. [0040]
  • However, of these two, only the retroreflector is both low cost, and totally unobtrusive to the user. Despite certain problems using same, it is the preferred type of target for general use, at least for detection in more than 3 degrees of freedom. Even in only two degrees, where standard “blob” type image processing might reasonably be used to find one's finger for example (cf. U.S. Pat. No. 5,168,531 by Sigel), use of simple glass bead based, or molded plastic corner cube based, retroreflectors allows much higher frequency response (e.g. 30 Hz, 60 Hz, or even higher detection rates) from the multiple incidence angles needed in normal environments, also with lower cost computers under a wider variety of conditions—and is more reliable as well (at least with today's PC processing power). [0041]
  • BRIEF SUMMARY OF THE INVENTION
  • Numerous 3D input apparatus exist today. As direct computer input for screen manipulation, the most common is the “Mouse”, which is manipulated in x and y and, through various artifices in the computer program driving the display, provides some control of the z-axis. In 3 dimensions (3-D), however, this is indirect, time consuming, artificial, and requires considerable training to do well. Similar comments relate to joysticks, which in their original function were designed for input of two angles. [0042]
  • In the computer game world as well, the mouse, joystick and other 2D devices prevail today. [0043]
  • The disclosed invention is optically based, and generally uses unobtrusive specialized datums on, or incorporated within, an object whose 3D position and/or orientation is desired to be inputted to a computer. Typically such datums are viewed with a single TV camera, or two TV cameras forming a stereo pair. A preferred location for the camera(s) is proximate the computer display, looking outward therefrom, or to the top or side of the human work or play space. [0044]
  • While many aspects of the invention can be used without specialized datums (e.g. a retroreflective tape on one's finger, versus use of the natural finger image itself), these specialized datums have been found to work more reliably, and at lowest cost, using technology which can be capable of wide dissemination in the next few years. This is very important commercially. Even where only two-dimensional position is desired, such as the x, y location of a finger tip, this is still the case. [0045]
  • For degrees of freedom beyond 3, we feel such specialized datum based technology is the only practical method today. Retroreflective glass bead tape, or beading, such as composed of Scotchlite 7615 by 3M co., provides a point, line, or other desirably shaped datum which can be easily attached to any object desired, and which has high brightness and contrast to surroundings such as parts of a human, clothes, a room etc, when illuminated with incident light along the optical axis of the viewing optics such as that of a TV camera. This in turn allows cameras to be used in normal environments, and having fast integration times capable of capturing common motions desired, and allows datums to be distinguished easily which greatly reduces computer processing time and cost. [0046]
  • Retroreflective or other datums are often distinguished by color or shape as well as brightness. Other suitable target datums can be distinguished just on color or shape or pattern, but do not have the brightness advantage offered by the retro. Suitable retroreflectors can alternatively be glass, plastic or retroreflective glass bead paints, and can be other forms of retroreflectors than beads, such as corner cubes. But the beaded type is most useful. Shapes of datums found to be useful have been, for example, dots, rings, lines, edge outlines, triangles, and combinations of the foregoing. [0047]
  • It is a goal of this invention to provide a means for data entry that has the following key attributes among others: [0048]
  • Full 3D (up to 6 degrees of freedom, eg x, y, z, roll, pitch, yaw) real time dynamic input using artifacts, aliases, portions of the human body, or combinations thereof [0049]
  • Very low cost, due also to ability to share cost with other computer input functions such as document reading, picture telephony, etc. [0050]
  • Generic versatility—can be used for many purposes, and saves as well on learning new and different systems for those purposes. [0051]
  • Unobtrusive to the user [0052]
  • Fast response, suitable for high speed gaming as well as desk use. [0053]
  • Compatible as input to large screen displays—including wall projections [0054]
  • Unique ability to create physically real “Alias” or “surrogate” objects [0055]
  • Unique ability to provide realistic tactile feel of objects in hand or against other objects, without adding cost [0056]
  • A unique ability to enable “Physical” and “Natural” experience. It makes using computers fun, and allows the very young to participate. And it radically improves the ability to use 3D graphics and CAD systems with little or no training. [0057]
  • An ability to aid the old and handicapped in new and useful ways. [0058]
  • An ability to provide meaningful teaching and other experiences capable of reaching wide audiences at low cost [0059]
  • An ability to give life to a child's imagination through the medium of known objects and software, without requiring high cost toys, and providing unique learning experiences. [0060]
  • What is also unique about the invention here disclosed is that it unites all of the worlds above, and more besides, providing the ability to have a common system that serves all purposes well—at lowest possible cost and complexity. [0061]
  • The invention has a unique ability to combine what amounts to 3D icons (physical artifacts) with static or dynamic gestures or movement sequences. This opens up, among other things, a whole new way for people, particularly children, beginners and those with poor motor or other skills to interact with the computer. By manipulating a set of simple tools and objects that have targets appropriately attached, a novice computer user can control complex 2D and 3D computer programs with the expertise of a child playing with toys![0062]
  • The invention also acts as an important teaching aide, especially for small children and the disabled, who have undeveloped motor skills. Such persons can, with the invention, become computer literate far faster than those using conventional input devices such as a mouse. The ability of the invention to use any desired portion of a human body, or an object in his command provides a massive capability for control, which can be changed at will. In addition, the invention allows one to avoid carpal tunnel syndrome and other effects of using keyboards and mice. One only needs move through the air so to speak, or with ergonomically advantageous artifacts. [0063]
  • The system can be calibrated for each individual to magnify even the smallest motion, to compensate for handicaps or enhance user comfort or other benefits (e.g. trying to work in a cramped space on an airplane). If desired, unwanted motions can be filtered or removed using the invention (in this case a higher number of camera images than would normally be necessary is typically taken, and effects in some frames averaged, filtered or removed altogether). [0064]
  • The invention also provides for high resolution of object position and orientation at high speed and at very low or nearly insignificant cost. And it provides for smooth input functions without the jerkiness of mechanical devices such as a sticking mouse of the conventional variety. [0065]
  • In addition, the invention can be used to aid learning in very young children and infants by relating gestures of hands and other bodily portions or objects (such as rattles or toys held by the child), to music and/or visual experiences via computer generated graphics or real imagery called from a memory such as DVD disks or the like. [0066]
  • The invention is particularly valuable for expanding the value of life-size, near life size, or at least large screen (e.g. greater than 42 inches diagonal) TV displays. Since the projection can now be of this size at affordable cost, the invention allows an also affordable means of relating in a lifelike way to the objects on the screen—to play with them, to modify them, and otherwise interrelate using one's natural actions and the naturally appearing screen size—which can also be in 3D using stereo display techniques of whatever desired type. [0067]
  • DESCRIPTION OF FIGURES
  • FIG. 1 illustrates basic sensing useful in practicing the invention [0068]
  • FIG. 1a illustrates a basic two dimensional embodiment of the invention utilizing one or more retroreflective datums on an object, further including means to share function with normal imaging for internet teleconferencing or other activities. [0069]
  • FIG. 1b illustrates a 3 Dimensional embodiment using single camera stereo with 3 or more datums on an object or wrist of the user. [0070]
  • FIG. 1c illustrates another version of the embodiment of FIG. 1a, in which two camera “binocular” stereo cameras are used to image an artificial target on the end of a pencil. Additionally illustrated is a 2 camera stereo and a line target plus natural hole feature on an object. [0071]
  • FIG. 1d illustrates a control flow chart of the invention [0072]
  • FIG. 1e is a flow chart of a color target processing embodiment [0073]
  • FIG. 2 illustrates Computer aided design system (CAD) related embodiments [0074]
  • FIG. 2a illustrates a first CAD embodiment according to the invention, and a version for 3-D digitizing and other purposes [0075]
  • FIG. 2b describes another computer design embodiment with tactile feedback for “whittling” and other purposes [0076]
  • FIG. 3 illustrates additional embodiments for working with virtual objects, and additional alias objects according to the invention [0077]
  • FIG. 4 illustrates a car driving game embodiment of the invention, which in addition illustrates the use of target-based artifacts and simplified head tracking with viewpoint rotation. The car dash is for example a plastic model purchased or constructed to simulate a real car dash, or can even be a make-believe dash (i.e. in which the dash is made, for example, from a board, and the steering wheel from a dish), and the car is simulated in its actions via computer imagery and sounds [0078]
  • FIG. 5 illustrates a one or two person airplane game according to the invention, to further include inputs for triggering and scene change via movement sequences or gestures of a player. Also illustrated in FIG. 5c is a hand puppet game embodiment of the invention played if desired over remote means such as the Internet [0079]
  • FIG. 6 illustrates other movements such as gripping or touch which can be sensed by the invention and which can be useful as input to a computer system, for the purpose of signaling that a certain action is occurring [0080]
  • FIG. 7 illustrates further detail as to the computer architecture of movement sequences and gestures, and their use in computer instruction via video inputs. Also illustrated are means to determine position and orientation parameters with minimum information at any point in time. [0081]
  • FIG. 8 illustrates embodiments, some of which are a simulation analog of the design embodiments above, used for Medical or dental teaching and other applications. [0082]
  • FIG. 8a illustrates a targeted scalpel used by a medical student for simulated surgery, further including a compressible member for calculating out of sight tip locations [0083]
  • FIG. 8c illustrates targeted instruments and targeted body model [0084]
  • FIG. 8d illustrates a body model on a flexible support [0085]
  • FIG. 8e illustrates a dentist doing real work with a targeted drill [0086]
  • FIG. 8f shows how a surgeon can control the manipulation of a laparoscopic tool or a robot tool through the complex 3D environment of a body with the help of a targeted model of a body as an assembly of body parts. [0087]
  • FIG. 8g is another embodiment [0088]
  • FIG. 9 illustrates a means for aiding the movement of a person's hands while using the invention in multiple degree of freedom movement [0089]
  • FIG. 10 illustrates a natural manner of computer interaction for aiding the movement of a person's hands while using the invention in multiple degree of freedom movement, with one's arms resting on an armrest of a chair, car, or the like [0090]
  • FIG. 11 illustrates coexisting optical sensors for other variable functions in addition to image data of scene or targets. A particular illustration of a Level vial in a camera field of view illustrates as well the establishment of a coordinate system reference for the overall 3-6 degree of freedom coordinate system of the camera(s). [0091]
  • FIG. 12 illustrates a touch screen employing target inputs from fingers or other objects in contact or virtual contact with the screen, either of the conventional CRT variety, an LCD screen, or a projection screen—including aerial projection in space. Calibration or other functions via targets projected on the screen is also disclosed. [0092]
  • FIG. 13 illustrates clothes design using preferred embodiments incorporating finger touch, laser pointing and targeted material. [0093]
  • FIG. 14 illustrates additional applications of alias objects such as those of FIG. 3, for purposes of planning visualization, building toys, and inputs in general. [0094]
  • FIG. 15 illustrates a sword play and pistol video game play of the invention using life size projection screens, with side mounted stereo camera and head tracking audio system (and/or tv camera/light source tracker) [0095]
  • FIG. 16 illustrates an embodiment of the invention having a mouse and/or keyboard of the conventional variety combined with targets of the invention on the user to give an enhanced capability even to a conventional word processing or spreadsheet, or other program. A unique portable computer for use on airplanes and elsewhere is disclosed [0096]
  • FIG. 17 illustrates an optically sensed keyboard embodiment of the invention, in this case for a piano [0097]
  • FIG. 18 illustrates gesture based musical instruments such as violins and virtual object musical instruments according to the invention, having synthesized tones and, if desired, display sequences. [0098]
  • FIG. 19 illustrates a method for entering data into a CAD system used to sculpt a car body surface. [0099]
  • FIG. 20 illustrates an embodiment of the invention used for patient or baby monitoring [0100]
  • FIG. 21 illustrates a simple embodiment of the invention for toddlers and preschool age children, which is also useful to aid learning in very young children and infants by relating gestures of hands and other bodily portions or objects such as rattles held by the child, to music and/or visual experiences. [0101]
  • FIG. 22 illustrates the use of a PSD (position sensitive photodiode) based image sensor rather than, or in conjunction with, a TV camera. Two versions are shown: a single point device, with retro-reflective illumination or with a battery powered LED source, and a multi-point device with LED sources. A combination of this sensor and a TV camera is also described, as is an alternative using fiber optic sources [0102]
  • FIG. 23 illustrates inputs to instrumentation and control systems, for example those typically encountered in car dashboards to provide added functionality and to provide an aide to drivers, including the handicapped [0103]
  • FIG. 24 illustrates means for simple “do it yourself” object creation using the invention [0104]
  • FIG. 25 illustrates a game experience with an object represented on a deformable screen. [0105]
  • FIG. 26 illustrates the use of motion blur to determine the presence of movement or calculate movement vectors [0106]
  • FIG. 27 illustrates retro-reflective jewelry and makeup according to the invention[0107]
  • DETAILED DESCRIPTION OF THE INVENTION
  • FIG. 1a [0108]
  • FIG. 1a illustrates a simple single camera based embodiment of the invention. In this case, a user 5 desires to point at an object 6 represented electronically on the screen 7 and cause the pointing action to register in the software contained in computer 8 with respect to that object (a virtual object), in order to cause a signal to be generated to the display 7 to cause the object to activate or allow it to be moved (e.g. with a subsequent finger motion or otherwise). He accomplishes this using a single TV camera 10, located typically on top of the screen as shown, or alternatively to the side (such as 11), to determine the position of his fingertip 12 in space, and/or the pointing direction of his finger 13. [0109]
  • It has been proposed by Sigel and others to utilize the natural image of the finger for this purpose and certain US patents address this in the group referenced above. Copending applications by one of the inventors (Tim Pryor) also describe finger related activity. [0110]
  • As disclosed in said co-pending application, it is however, often desirable to use retro-reflective material on the finger, disclosed herein as either temporarily attached to the finger as in jewelry or painted on the finger using retro-reflective coating “nail polish” or adhered to the finger such as with adhesive tape having a retro-reflective coating. Such coatings are typically those of Scotchlite 7615 and its equivalent that have high specific reflectivity, contrasting well to their surroundings to allow easy identification. The brightness of the reflection allows dynamic target acquisition and tracking at lowest cost. [0111]
  • The camera system employed for the purposes of low cost desirable for home use is typically that used for Internet video conferencing and the like today. These are CCD, and more recently CMOS, cameras having low cost (25-100 dollars) yet relatively high pixel counts and densities. It is considered that within a few years these will be standard on all computers, for all intents and purposes “free” to the applications here proposed, and interfaced via “FireWire” (IEEE 1394) or USB (universal serial bus). [0112]
  • The use of retroreflective and/or highly distinctive targets (e.g. bright orange triangles) allows reliable acquisition of the target in a general scene, and does not restrict the device to pointing in a desktop application under controlled lighting as shown in Sigel or others. Active (self luminous) targets such as LEDs also allow such acquisition, but are more costly, cumbersome and obtrusive, and generally less preferable. [0113]
  • If we consider camera system 10 sitting on top of the screen 7 and looking at the user or more particularly, the user's hand, in a normal case of Internet telephony there is a relatively large field of view so that the user's face can also be seen. This same field of view can be used for this invention but it describes a relatively large volume. For higher precision, add-on lenses or zoom lenses on the camera may be used to increase the resolution. [0114]
  • Or it is possible according to the invention to have a plurality of cameras, one used for the Internet and the other used for the input application here described. Indeed with the ever dropping prices, the price of the actual camera including the plastic lens on the CMOS chip is so low, it is possible perhaps even to have multiple cameras with fixed magnifications, each having a separate chip![0115]
  • These can easily be daisy chained with either FireWire or USB such that they can be selected at will electronically, according to the different magnifications or pointing directions desired. [0116]
  • Let us now return to the question of determining the location or orientation of a human portion such as typically a hand or finger—in this case, a finger. In order to make this invention operate at the lowest possible cost it is desirable that the lighting be low cost as well. Indeed if the camera units are shared with telephony using the natural lighting of the object, then the specialized lighting required for the retro-reflectors adds cost to the system. The power for the lighting, such as LEDs, can generally be conveyed over the USB or 1394 bus however. The user can also point or signal with an object such as 15 having datum 16 on it, such as a retroreflective dot 16 or line target 17. [0117]
  • It is possible to expand the sensing of 2D positions described above into 3, 4, 5 and 6 dimensions (x, y plus z, pitch, yaw, roll). Two sensing possibilities, of the many possible, are described in various embodiments herein. [0118]
  • 1. The first, illustrated in FIGS. 1a and b, is to utilize a single camera, but multiple discrete features or other targets on the object, which can provide a multi-degree of freedom solution. In one example, the target spacing on the object is known a priori and entered into the computer manually or automatically from software containing data about the object, or can be determined through a teaching step. [0119]
  • 2. The second is a dual camera solution, shown in FIGS. 1c and d, that does not require a priori knowledge of targets and in fact can find the 3D location of one target by itself, useful for determining finger positions for example. For 6 degrees of freedom, at least three point targets are required, although line targets, and combinations of lines and points, can also be used. [0120]
  • FIG. 1b illustrates a 3-D (3 Dimensional) sensing embodiment using single camera stereo with 3 or more datums on a sensed object, or in another example, the wrist of the user. [0121]
  • As shown, the user holds in his right hand 29 object 30, which has at least 3 visible datums 32, 33, and 34, which are viewed by TV camera 40, whose signal is processed by computer 41, which also controls projection display 42. TV camera 40 also views 3 other datums 45, 46 and 47 on the wrist 48 of the user's left hand, in order to determine its orientation or rough direction of pointing of the left hand 51, or its position relative to object 30, or any other data (e.g. relation to the screen position or other location related to the mounting position of the TV camera, or to the user's head if viewed, or whatever). The position and orientation of the object and hand can be determined from the 3 point positions in the camera image using known photogrammetric equations (see Pinckney, U.S. Pat. No. 4,219,847, and other references in the papers referenced). [0122]
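  • By way of illustration, the following is a minimal sketch of this kind of single-camera pose recovery, using OpenCV's solvePnP routine as a stand-in for the photogrammetric equations cited above. Four coplanar datums are assumed here (most general-purpose solvers want at least four points for a unique answer), and all point layouts, pixel coordinates and calibration constants are illustrative assumptions only, not values from the disclosure.

    import numpy as np
    import cv2

    # Datum positions on the object, in the object's own coordinate frame (mm) - assumed layout.
    object_points = np.array([[0, 0, 0],
                              [60, 0, 0],
                              [60, 40, 0],
                              [0, 40, 0]], dtype=np.float64)

    # Pixel locations of the same datums found in one TV camera image (hypothetical values).
    image_points = np.array([[312.4, 240.1],
                             [398.7, 236.9],
                             [402.2, 301.5],
                             [309.8, 305.0]], dtype=np.float64)

    # Simple pinhole camera model: focal length (pixels) and principal point from a prior calibration.
    camera_matrix = np.array([[800.0, 0.0, 320.0],
                              [0.0, 800.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.zeros(4)  # assume negligible lens distortion for this sketch

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
    if ok:
        R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation of the object relative to the camera
        print("rotation:\n", R, "\ntranslation (mm):", tvec.ravel())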
  • Alternatively to the 3 discrete point targets, a colored triangular target for example can be used, in which the intersections of lines fitted to its sides define the target datums, as discussed below. [0123]
  • It is also possible to use the camera 40 to see other things of interest as well. For example, to determine the direction of pointing of the user at an object 55 represented on display 42, datum 50 on finger 52 of the user's left hand 51 (whose wrist position and attitude can also be determined) is determined. [0124]
  • Alternatively, the finger can be detected just from its general gray level image, and can be easily identified in relation to the targeted wrist location (especially if the user, as shown, has clenched his other fingers such that the finger 52 is the only one extended on that hand). [0125]
  • The computer can process the gray level image using known techniques, for example blob and other algorithms packaged with the Matrox brand Genesis image processing board for the PC, and determine the pointing direction of the finger using the knowledge of the wrist gained from the datums. This allows the left hand finger 50 to alternatively point at a point (or touch a point) to be determined on the object 30 held in the right hand as well. [0126]
  • FIG. 1c [0127]
  • FIG. 1c illustrates another version of the embodiments of FIGS. 1a and b, in which two camera “binocular” stereo cameras 60 and 61, processed by computer 64, are used to image an artificial target (in this case a triangle, see also FIG. 2) 65 on the end of pencil 66, and optionally, to improve pointing resolution, target 67 on the tip end of the pencil, typically a known small distance from the tip (the user and his hand holding the pencil are not shown for clarity). This imaging allows one to track the pencil tip position in order to determine where on the paper (or TV screen, in the case of a touch screen) the pencil is contacting (see also FIG. 2, and FIG. 12). [0128]
  • For best results it is often desirable to have independently controllable, near coaxial light sources; light sources 62 and 63 are shown controlled by computer 64 to provide illumination of retroreflective targets for each camera independently. This is because at different approach angles the retroreflector reflects differently, and since the cameras are often angularly spaced (e.g. by non-zero angle A), they do not see a target the same way. [0129]
  • Numerous other camera arrangements, processing, computation, and other issues are discussed in general relative to accurate determination of object positions using two or more camera stereo vision systems in the S. F. El Hakim paper referenced above and the additional references referred to therein. [0130]
  • The computer can also acquire the stereo image of the paper and the targets in its four corners, 71-74. Solution of the photogrammetric equation allows the position of the paper in space relative to the cameras to be determined, and thence the position of the pencil, and particularly its tip, relative to the paper, which is passed to display means 75 or another computer program. Even without the target on the end, the pointing direction can be determined from target 65 and, knowing the length of the pencil, the tip position calculated. [0131]
  • A line target 76 can also be useful on the pencil; a plurality of line targets spaced circumferentially can also be of use in defining the pencil pointing direction from the stereo image pair. [0132]
  • A working volume of the measurement system is shown in dotted lines 79—that is, the region on and above the desk top in this case where the sensor system can operate effectively. Typically this is more than satisfactory for the work at hand. [0133]
  • It is noted that the dual (stereo pair) camera system of FIG. 1 has been extensively tested and can provide highly accurate position and orientation information in up to 6 degrees of freedom. One particular version, using commercial CCD black and white cameras, a Matrox “Genesis” framegrabber and image processing board, and suitable stereo photogrammetry software running in an Intel Pentium 300 MHz based computer, has characteristics well suited to input from a large desktop CAD station for example. This provides 30 Hz updates of all 6 axes (x, y, z, roll, pitch and yaw) of data over a working volume of 0.5 meter×0.5 meter in x and y (the desktop, where cameras are directly overhead pointing down at the desk) and 0.35 meters in z above the desk, all to an accuracy of 0.1 mm or better, when used with clearly visible round retroreflective (Scotchlite 7615 based) datums approx. 5-15 mm in diameter on an object for example. This is accurate enough for precision tasks such as designing objects in 3D CAD systems, a major goal of the invention. [0134]
  • The cameras in this example are mounted overhead. If mounted to the side or front, or at an angle such as 45 degrees to the desktop, the z axis becomes the direction outward from the cameras. [0135]
  • FIG. 1c additionally illustrates a 2 camera stereo arrangement, used in this case to determine the position and orientation of an object having a line target, and a datum on a portion of the user. Here, cameras 60 and 61 are positioned to view a retro-reflective line target 80, in this case running part of the length of a toy sword blade 81. The line target in this case is made as part of the plastic sword, and is formed of molded-in corner cube reflectors similar to those in a tail light reflector on a car. It may also be made to be one unique color relative to the rest of the sword, and the combination of the two gives an unmistakable indication. [0136]
  • There are typically no other bright lines in any typical image when viewed retroreflectively. This also illustrates how target shape (i.e. a line) can be used to discriminate against unwanted other glints and reflections which might comprise a few bright pixels worth in the image. It is noted that a line type of target can be cylindrical in shape if wrapped around a cylindrical object, which can then be viewed from multiple angles. [0137]
  • Matching of the two camera images and solution of the photogrammetric equations gives the line target pointing direction. If an additional point is used, such as 82, the full 6 degree of freedom solution of the sword is available. Also shown here is yet another point, 83, which serves two purposes, in that it allows an improved photogrammetric solution, and it serves as a redundant target in case 82 can't be seen, due to obscuration, obliteration, or what have you. [0138]
  • This data is calculated in computer 64, and used to modify a display on screen 75 as desired, as further described in FIG. 15. [0139]
  • In one embodiment a Matrox Genesis frame processor card on an IBM 300 MHz PC was used to read both cameras, and process the information at the camera frame rate of 30 Hz. Such line targets are very useful on sleeves of clothing, seams of gloves for pointing, rims of hats, and other decorative and practical purposes, for example outlining the edges of objects or portions thereof, such as holes and openings. [0140]
  • Typically the cameras 60 and 61 have magnifications and fields of view which are equal, and overlap in the volume of measurement desired. The axes of the cameras can be parallel, but for operation at ranges of a few meters or less are often inclined at an acute angle A with respect to each other, so as to increase the overlap of their fields of view—particularly if larger baseline distances d are used for increased accuracy (albeit with less z range capability). For example, for a CAD drawing application, A can be 30-45 degrees, with a baseline of 0.5 to 1 meter. Whereas for a video game such as FIG. 5, where z range could be 5 meters or more, the angle A and the baseline would be less, to allow a larger range of action. [0141]
  • Database [0142]
  • The datums on an object can be known a priori relative to other points on the object, and to other datums, by selling or otherwise providing the object designed with such knowledge to a user and including with it a CD ROM disc or other computer-interfaceable storage medium having this data. Alternatively, the user, or someone else, can teach the computer system this information. This is particularly useful when the datums are applied by the user to arbitrary objects. [0143]
  • FIG. 1d [0144]
  • Illustrated here are steps used in the invention relating to detection of a single point to make a command, in this case the position (or change of position, i.e. movement) of a finger tip as in FIG. 12 having retroreflective target 1202 attached, detected by a stereo pair of TV cameras 1210, using a detection algorithm which in its simplest case is based on thresholding the image to see only the bright target indication from the finger (and optionally, any object associated therewith, such as a screen to be touched for example). [0145]
  • If this is insufficient to unambiguously define the datum on the finger, added algorithms may be employed which are themselves known in the art (many of which are commonly packaged with image analysis frame grabber boards such as the Matrox Genesis). The processes can include for example: [0146]
  • A brightness detection step, relative to surroundings or to immediate surroundings (contrast) [0147]
  • a shape detection step, in which a search for a shape is made, such as a circle, ring, triangle, etc. [0148]
  • a color detection step, where a search for a specific color is made [0149]
  • a movement step, wherein only target candidates which have moved from a location in a previous TV image are considered [0150]
  • Each step may process only those candidates passing the previous step, or each may be performed independently and the results compared later. The order of these steps can be changed, but each adds to further identify the valid indication of the finger target. [0151]
  • Next the position of the targeted finger is determined by comparing the difference in location of the finger target in the two camera images of the stereo pair. There is no matching problem in this case, as a single target is used, which appears as only one found point in each image. [0152]
  • After the image of the finger (or other tool) tip is found, its location is computed relative to the screen or paper, and this data is inputted to the computer controlling the display to modify same, for example the position of a drawing line or an icon, or to determine a vector of movement on the screen. [0153]
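  • As a rough sketch of the stereo comparison just described, and assuming two identical, parallel (rectified) cameras with a known baseline, the target's x, y, z location follows directly from the difference in image column of the single found point (the disparity). The focal length, baseline and pixel coordinates below are illustrative assumptions, not values from the disclosure.

    def triangulate_rectified(u_left, v_left, u_right, focal_px, baseline_mm, cx, cy):
        """Return (x, y, z) of a single target, in mm, in the left camera's frame."""
        disparity = u_left - u_right          # same single target in each image, so no matching problem
        z = focal_px * baseline_mm / disparity
        x = (u_left - cx) * z / focal_px
        y = (v_left - cy) * z / focal_px      # image y grows downward in this convention
        return x, y, z

    # Example: a retroreflective finger datum seen at slightly different columns in each camera.
    print(triangulate_rectified(352.0, 260.0, 192.0, 800.0, 120.0, 320.0, 240.0))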
  • Motion Detection [0154]
  • The computer 8 can be used to analyze incoming TV image based signals and determine which points are moving in the image. This is helpful to eliminate background data which is stationary, since oftentimes only moving items such as a hand or object are of interest. In addition, the direction of movement is in many cases the answer desired, or even the fact that a movement occurred at all. [0155]
  • A simple way to determine this is to subtract an image of retroreflective targets of high contrast from a first image—and just determine which parts are different—essentially representing movement of the points. Small changes in lighting or other effects are not registered. There are clearly more sophisticated algorithms as well. [0156]
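  • A minimal sketch of this subtraction idea, assuming successive 8-bit grayscale frames from one of the TV cameras and simple fixed thresholds (both threshold values are assumptions to be tuned for the lighting and targets at hand):

    import numpy as np

    def moving_target_mask(frame_prev, frame_curr, brightness_threshold=200, diff_threshold=30):
        """Return a boolean mask of bright (target) pixels that changed between frames."""
        bright = frame_curr > brightness_threshold                      # candidate retro-target pixels
        changed = np.abs(frame_curr.astype(np.int16)
                         - frame_prev.astype(np.int16)) > diff_threshold
        return bright & changed                                         # stationary glints drop out

    # Toy example: an 8-bit image in which one bright blob has shifted a few pixels between frames.
    prev = np.zeros((480, 640), np.uint8); prev[100:110, 200:210] = 255
    curr = np.zeros((480, 640), np.uint8); curr[100:110, 205:215] = 255
    print(moving_target_mask(prev, curr).sum(), "moving target pixels")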
  • Motion pre processing is useful when target contrast is not very high, as it allows one to get rid of extraneous regions and concentrate all target identification and measurement processing on the real target items. [0157]
  • Such processing is also useful when two camera stereo is used, as only moving points are considered in image matching—a problem when there are lots of points in the field. [0158]
  • Can it be assumed that the object is moving? The answer is yes if it's a game or many other activities. However, there may be a speed of movement issue. Frame to frame is probably the criterion in a game, namely 30 Hz for a typical camera. However, in some cases movement might be defined as something much slower—e.g. 3 Hz for a CAD system input using deliberate motion of a designer. [0159]
  • Once the moving datum is identified, then the range can be determined and if the object is then tracked even if not moving from that point onward, the range measurement gives a good way to lock onto the object using more than just 2 dimensions. [0160]
  • One might actually use an artificial movement of the target if one doesn't naturally exist. This could be done by causing it to vibrate. If one or more LEDs are used as a target, they can be made to blink, which also shows up in an image subtraction (image with LED on vs. image with LED off). The same is true of a target which changes color, showing up in subtraction of color images. [0161]
  • Image subtraction or other computer processing operations can also be useful in another sense. One can also subtract background, energizing the retroreflective illumination light with no retroreflective targets present, and then with them. One idea is simply to take a picture of a room or other work space, and then bring in the targeted object; the first image can then be subtracted. The net result is that any bright features in the space which are not of concern, such as bright door knobs, glasses, etc., are eliminated from consideration. [0162]
  • This can also be done with colored targets, doing a color based image subtract—especially useful when one knows the desired colors a priori (as one would, or could, via a teach mode). [0163]
  • A flow chart is shown in FIG. 1d illustrating the steps as follows (an illustrative sketch of steps C through G appears after the list): [0164]
  • A. Acquire images of stereo pair [0165]
  • B. Optionally preprocess images to determine if motion is present. If so, pass to next step otherwise do not or do anyway (as desired) [0166]
  • C. Threshold images [0167]
  • D. If light insufficient, change light or other light gathering parameter such as integration time [0168]
  • E. Identify target(s) [0169]
  • F. If not identifiable, add other processing steps such as a screen for target color, shape, or size [0170]
  • G. Determine centroid or other characteristic of target point (in this case a retro dot on finger) [0171]
  • H. Perform auxiliary matching step if required [0172]
  • I. Compare location in stereo pair to determine range z and x y location of target (s) [0173]
  • J. Auxiliary step of determining location of targets on screen if screen position not known to computer program. Determine via targets on screen housing or projected on to screen for example [0174]
  • K. Determine location of target relative to screen [0175]
  • L. Determine point in display program indicated [0176]
  • M. Modify display and program as desired. [0177]
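  • As a rough sketch of steps C through G above, for one camera of the stereo pair, assuming an 8-bit grayscale frame in which the retroreflective finger datum is by far the brightest region. OpenCV is used here only as a convenient stand-in for the image processing boards mentioned elsewhere, and the threshold and size limits are assumptions to be tuned.

    import numpy as np
    import cv2

    def find_target_centroid(gray_frame, threshold=200, min_area=10, max_area=2000):
        """Threshold the image, keep plausible-sized bright blobs, return the centroid of the largest."""
        _, binary = cv2.threshold(gray_frame, threshold, 255, cv2.THRESH_BINARY)      # step C
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)        # step E
        candidates = [i for i in range(1, n)
                      if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]          # step F: size screen
        if not candidates:
            return None     # step D would instead adjust lighting or integration time and retry
        best = max(candidates, key=lambda i: stats[i, cv2.CC_STAT_AREA])
        return tuple(centroids[best])                                                  # step G: centroid

  • The centroid returned for each camera of the pair would then be passed to a triangulation step such as the one sketched earlier (step I), and thence related to the screen coordinates (steps J through M).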
  • The simple version of the invention here disclosed answers several problems experienced in previous attempts to implement such inputs to computers: [0178]
  • 1. Computationally intensive [0179]
  • 2. Latency (frequency response, time to get position or orientation answer) [0180]
  • 3. Noise (unreliability caused by ambient electronic, processing, or other conditions) [0181]
  • 4. Lighting (unreliability caused by ambient illumination, processing, or other conditions) [0182]
  • 5. Initialization [0183]
  • 6. Background problems, where the situation background cannot be staged, as in a cad system input on a desk. [0184]
  • It particularly achieves this simply and at low cost because of the function of the retroreflector targets used, which help answer all 6 needs above. When combined with color and/or shape detection, the system can be highly reliable, fast and low cost. In some more controlled cases, having slower movements and more uniform backgrounds for example, retro material is not needed. [0185]
  • FIG. 1e [0186]
  • The following is a multi-degree of freedom image processing description of a triangular shaped color target (disclosed itself in several embodiments of the invention herein) which can be found optically using one or more cameras to obtain the 3 dimensional location and orientation of the target using a computer based method described below. It uses color processing to advantage, as well as a large number of pixels for highest resolution, and is best for targets that are defined by a large number of pixels in the image plane, typically because the target is large, or the cameras are close to the target, or the camera field is composed of a very large number of pixels. The method is simple but unique in that it can be applied 1) in a variety of degrees to increase the accuracy (albeit at the expense of speed), 2) with 1 or more cameras (more cameras increase accuracy), and 3) it can utilize the combination of the target's colors and triangles (1 or more) to identify the tool or object. It utilizes the edges of the triangles to obtain accurate subpixel accuracy. A triangle edge can even have a gentle curve and the method will still function well. The method is based on accurately finding the 3 vertices (F0, G0, F1, G1, F2, G2) of each triangle in the camera field by accurately defining the edges and then computing the intersection of these edge curves, rather than finding 3 or 4 points from spot centroids. [0187]
  • The preferred implementation uses 1 or more color cameras to capture a target composed of a brightly colored right triangle on a rectangle of different brightly colored background material. The background color and the triangle color must be two colors that are easily distinguished from the rest of the image. For purposes of exposition we will describe the background color as a bright orange and the triangle as aqua. [0188]
  • By using the differences between the background color and the triangle color, the vertices of the triangle can be found very accurately. If there is more than one triangle on a target, a weighted average of location and orientation information can be used to increase accuracy. The method starts searching for a pixel with the color of the background or of the triangle, beginning with the pixel location of the center of the triangle from the last frame. Once a pixel with the triangle “aqua” color is found, the program marches in four opposite directions until each march detects a color change indicative of an edge dividing the triangle and the “orange” background. Next, the method extends the edges to define three edge lines of the triangle with a least squares method. The intersection points of the resulting three lines are found, and serve as rough estimates of the triangle vertices. These can serve as input for applications that don't require high accuracy. [0189]
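  • A brief sketch of this rough vertex estimate, assuming the lists of edge points found by the marches are already in hand: fit a least squares line to each edge, then intersect adjacent lines to obtain the provisional vertices. The point values in the example are illustrative only.

    import numpy as np

    def fit_line(points):
        """Least-squares line a*x + b*y = c through a set of (x, y) edge points."""
        pts = np.asarray(points, float)
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid)     # principal axis = edge direction
        a, b = vt[1]                                 # normal to the best-fit line
        return a, b, a * centroid[0] + b * centroid[1]

    def intersect(l1, l2):
        """Intersection of two lines given as (a, b, c) with a*x + b*y = c."""
        A = np.array([[l1[0], l1[1]], [l2[0], l2[1]]])
        c = np.array([l1[2], l2[2]])
        return np.linalg.solve(A, c)                 # provisional vertex (F, G) in pixel coordinates

    # Example: a near-vertical edge and a near-horizontal edge meeting near (100, 50).
    left_edge = fit_line([(100.0, 50.0), (101.0, 80.0), (102.0, 110.0)])
    bottom_edge = fit_line([(100.0, 50.0), (140.0, 52.0), (180.0, 54.0)])
    print(intersect(left_edge, bottom_edge))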
  • If better accuracy is desired, these provisional lines are then used as a starting point for the subpixel refinement process. Each of these 3 lines is checked to see if it is mainly horizontal. If a line is mainly horizontal, then a new line will be determined by fitting a best fit curve through the pixel in each column that straddles the provisional line. If a line is mainly vertical, then the same process proceeds on rows of pixels. [0190]
  • The color of each pixel crossed by a line is translated into a corresponding numeric value. A completely aqua pixel would receive the value 0, while a completely orange pixel would receive the value 1. All other colors produce a number between 0 and 1, based on their relative amounts of aqua and orange. This numeric value, V, assigned to a pixel is a weighted average of the color components (such as the R, G, B values) of the pixel. If the components of the calibrated aqua are AR, AG, AB and those of orange are OR, OG, OB, and the pixel components are PR, PG, PB, then the numeric value V is: [0191]
  • V=WR*CR+WG*CG+WB*CB
  • with WR, WG, WB being weighting constants between 0 and 1, and CR defined as: [0192]
  • A flow chart is shown in FIG. 2a. [0193]
  • The same process can be used to define CG and CB. [0194]
  • This value V is compared with the ideal value U which is equal to the percentage of orangeness calculated assuming the angle of the provisional line is the same as that of the ideal line. For example, a pixel which is crossed by the line in the exact middle would have a U of 0.5, since it is 50% aqua and 50% orange. A fit of U-V in the column (or row) in the vicinity of the crossing of the provisional line gives a new estimate of the location of the true edge crossing. Finally, the set of these crossing points can be fit with a line or gentle curve for each of the three edges and the 3 vertices can be computed from the intersections of these lines or curves. [0195]
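  • A short sketch of the per-pixel value V follows. Since the precise definition of CR is not reproduced above, this sketch assumes the natural choice of normalizing each color component of the pixel between the calibrated aqua and orange references (an assumption), with the weights WR, WG, WB chosen to sum to one. All numeric values are illustrative.

    import numpy as np

    def orangeness(pixel_rgb, aqua_rgb, orange_rgb, weights=(0.5, 0.3, 0.2)):
        """Return V in [0, 1]: 0 for a fully aqua pixel, 1 for a fully orange pixel."""
        P = np.asarray(pixel_rgb, float)
        A = np.asarray(aqua_rgb, float)
        O = np.asarray(orange_rgb, float)
        # Assumed per-channel definition of CR, CG, CB; channels where aqua and orange
        # coincide should be given zero weight to avoid dividing by zero.
        C = np.clip((P - A) / (O - A), 0.0, 1.0)
        W = np.asarray(weights, float)
        return float(np.dot(W, C))                    # V = WR*CR + WG*CG + WB*CB

    # Example: a pixel straddling the edge roughly half-and-half should give V near 0.5.
    print(orangeness((128, 150, 150), aqua_rgb=(0, 180, 200), orange_rgb=(255, 120, 100)))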
  • We can now use these three accurate vertices in the camera plane (F0, G0, F1, G1, F2, G2) together with the lens formula (here we will use the simple lens formula for brevity) to relate the x and y of the target to F and G: [0196]
  • F=λX/Z; G=λY/Z
  • λ is the focal length and z is the perpendicular distance from the lens to a location on the target. A triangle on the target is initially defined as lying in a plane parallel to the lens plane. The preferred configuration has one right triangle whose right angle is defined at x0, y0, z0, with one edge (of length A) extending along the direction of the F axis of the camera and the other edge (of length B) extending along the direction of the G axis of the camera. The actual target orientation is related to this orientation with the use of Euler angles φ, θ, ψ. Together with the lens equations and the Euler equations, the 6 derived data values of the 3 vertices (F0, G0, F1, G1, F2, G2) can be used to define 6 values of location and orientation of the target. The location and orientation of a point of interest on any tool or object rigidly attached to this target can be easily computed from calibration data and ordinary translation and rotation transformations. Refinements to handle lens distortions can be handled by forming a correction function with calibration data that modifies the locations of the F and G data. [0197]
  • The Euler formulation is nonlinear. We linearize the equations by assuming initially that the angles have not changed much since the last video frame. Thus we replace φ with φ(old)+U1, θ with θ(old)+U2, ψ with ψ(old)+U3, and z0 with z0(old)+U4, or: [0198]
  • φ=φ(old)+U1
  • θ=θ(old)+U2
  • ψ=ψ(old)+U3
  • z0=z0(old)+U4
  • Substituting these into the Euler equations and applying the lens formulas leads to a matrix equation[0199]
  • S U=R
  • that can be solved for the U values with a standard method such as a Gauss-Jordan routine. The angles and z0 can be updated iteratively until convergence is achieved. The coefficients of the matrix are defined as: [0200]
  • s11=−A(cos(φ)(F1/λ cos (ψ)+sin(ψ))−sin (φ)cos (θ)(F1/λ sin (ψ)−cos (ψ)))
  • s12=A sin(θ)cos(φ)(F1/λ sin(ψ)−cos(ψ))
  • s13=A(sin(φ)(F1/λ sin(ψ)−cos(ψ))−cos(φ)cos(θ)(F1/λ cos(ψ)−sin(ψ)))
  • s14=(F0−F1)/λ
  • s21=A(G1/λ(−cos(φ)*cos(ψ)+sin(φ)sin(ψ)cos(θ))+sin(θ)sin(φ))
  • s22=A cos(φ)(G1/λ sin(θ)sin (ψ)−cos(θ))
  • s23=G1/λA(sin(ψ)sin(φ)−cos(ψ)cos(θ)cos(φ))
  • s24=(G0−G1)/λ
  • s31=0
  • s32=−B cos(θ)(F2/λ sin(ψ)−cos(ψ))
  • s33=−B sin(θ)(F2/λ cos(ψ)+sin(ψ))
  • s34=(F0−F2)/λ
  • s41=0
  • s42=−B(G2/λ sin(ψ)cos(θ)+sin(θ))
  • s43=−B G2/λ sin(θ)cos(ψ)
  • s44=(G0−G2)/λ
  • and the right hand side vector is defined as:[0201]
  • r1=(F1−F0)z0/λ+A(F1/λ(cos(ψ)sin(φ)+cos(θ)cos(φ)sin(ψ))+sin(ψ)sin(φ)−cos(θ)cos(φ)cos(ψ))
  • r2=(G1−G0)z0/λ+A(G1/λ(cos(ψ)sin(φ)+cos(θ)cos(φ)sin(ψ))+sin(θ)cos(φ))
  • r3=(F2−F0)z0/λ+B sin(θ)(F2/λ sin(ψ)−cos(ψ))
  • r4=(G2−G0)z0/λ+B(G2/λ sin(θ)sin(ψ)−cos(θ))
  • After convergence the remaining parameters x0 and y0 are defined from the equations: [0202]
  • x0=F0 z0/λ
  • y0=G0 z0/λ
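  • The following compact sketch illustrates the iterative scheme just described—predict the vertex image coordinates from the pose via the Euler rotation and the lens formula, linearize, solve for the corrections, and repeat—using a numerically estimated Jacobian in place of the closed-form s and r expressions listed above. The Euler convention and all helper names are assumptions made for illustration, not the disclosure's exact formulation.

    import numpy as np

    def euler_matrix(phi, theta, psi):
        # One common Z-X-Z Euler convention (an assumption; the text does not fix the convention).
        c, s = np.cos, np.sin
        Rz1 = np.array([[c(phi), -s(phi), 0], [s(phi), c(phi), 0], [0, 0, 1]])
        Rx  = np.array([[1, 0, 0], [0, c(theta), -s(theta)], [0, s(theta), c(theta)]])
        Rz2 = np.array([[c(psi), -s(psi), 0], [s(psi), c(psi), 0], [0, 0, 1]])
        return Rz1 @ Rx @ Rz2

    def project(params, F0, G0, A, B, lam):
        # Predict (F1, G1, F2, G2): rotate the two triangle edges, then apply F = lam*X/Z, G = lam*Y/Z.
        phi, theta, psi, z0 = params
        p0 = np.array([F0 * z0 / lam, G0 * z0 / lam, z0])        # right-angle vertex from its image point
        R = euler_matrix(phi, theta, psi)
        p1 = p0 + R @ np.array([A, 0.0, 0.0])                    # end of the edge of length A
        p2 = p0 + R @ np.array([0.0, B, 0.0])                    # end of the edge of length B
        return np.array([lam * p1[0] / p1[2], lam * p1[1] / p1[2],
                         lam * p2[0] / p2[2], lam * p2[1] / p2[2]])

    def solve_pose(measured, F0, G0, A, B, lam, guess, iters=20, tol=1e-9):
        # measured = (F1, G1, F2, G2) from the image; guess = (phi, theta, psi, z0) from the last frame.
        params = np.array(guess, dtype=float)
        for _ in range(iters):
            base = project(params, F0, G0, A, B, lam)
            residual = np.asarray(measured, float) - base
            J = np.empty((4, 4))                                 # numerical stand-in for s11..s44
            for j in range(4):
                step = np.zeros(4); step[j] = 1e-6
                J[:, j] = (project(params + step, F0, G0, A, B, lam) - base) / 1e-6
            delta = np.linalg.solve(J, residual)                 # the "S U = R" solve; degenerate poses may need damping
            params += delta
            if np.max(np.abs(delta)) < tol:
                break
        phi, theta, psi, z0 = params
        return F0 * z0 / lam, G0 * z0 / lam, z0, phi, theta, psi  # x0 = F0*z0/lam, y0 = G0*z0/lam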
  • The transition of pronounced colors can yield considerably more information than a black white transition, and is useful for the purpose of accurately calculating position and orientation of an object. As color cameras and high capacity processors become inexpensive, the added information provided can be accessed at virtually no added cost. And very importantly, in many cases color transitions are more pleasing to look at for the user than stark black and white. In addition the color can be varied within the target to create additional opportunities for statistically enhancing the resolution with which the target can be found. [0203]
  • Problems in 3Dimensional Input to Computers [0204]
  • Today, input to a computer of Three Dimensional (3D) information is often painstakingly done with a 2 Dimensional device such as a mouse or similar device. This artifice, both for the human, and for the program and its interaction with the human, is unnatural, and CAD designers working with 3D design systems require many years of experience to master the skills needed for efficient design using same. [0205]
  • A similar situation exists with the very popular computer video games, which are becoming ever more 3 Dimensional in content and graphic imagery, but with similar limitations. These games too heretofore have not been natural for the player(s). [0206]
  • “Virtual reality” too requires 3D inputs for head tracking, movement of body parts and the like. This has led to the development of a further area of sensor capability, which has resulted in some solutions which are either cumbersome for the user, expensive, or both. [0207]
  • The limits of computer input in 3D have also restricted the use of natural type situations for teaching, simulation in medicine, and the like. It further limits young children, older citizens, and disabled persons from benefiting from computer aided living and work. [0208]
  • Another aspect is digitization of object shapes. There are times that one would like to take a plastic model or a real world part as a starting point for a 3D design. Prior art devices that capture 3D shapes are however, expensive and cumbersome and cannot, like the invention, share their function for replacement of the mouse or 2D graphic tablet. [0209]
  • We propose one single inexpensive device that can give all of this control and also act as a drawing pad, input 3D sculptured forms, or even allow the user to use real clay such that as she sculpts it the computer records the new shape. [0210]
  • The invention as here disclosed relates physical activities and physical objects directly to computer instructions. A novice user can design a house with a collection of targeted model or “toy” doors, windows, walls etc. By touching the appropriate toy component and then moving and rotating the user's hand she can place the component at the appropriate position. The user can either get his or her visual cue by looking at the position of the toy on the desk or by watching the corresponding scaled view on the computer display. Many other embodiments are also possible. [0211]
  • FIG. 2a [0212]
  • This figure illustrates an embodiment wherein the invention is used to “work” on an object, as opposed to pointing or otherwise indicating commands or actions. It is a computer aided design system (CAD) embodiment according to the invention which illustrates several basic principles of optically aided computer inputs using single or dual/multi-camera (Stereo) photogrammetry. Illustrated are new forms of inputs to effect both the design and simulated assembly of objects. [0213]
  • 3D Computer Aided Design (CAD) was one of the first areas to bump up against the need for new 3D input and control capability. A mouse, or in the alternative a 2D graphic tablet, together with software that displays several different views of the design, is the current standard method. The drawback is that you are forced to move along 2D planes defined by display views or what are known as construction views of the design object. [0214]
  • This situation is especially frustrating when you start creating a design from scratch. The more sculptured the design, the more difficult this becomes. The current CAD experience feels more like an astronaut in a space suit with bulky fingertips and limited visibility trying to do delicate surgery. [0215]
  • A large number of specialized input devices have been designed to handle some of these problems but have had limited success. Just remember your own frustrations with the standard mouse. Imagine attempting to precisely and rapidly define and control complex 3D shapes all day, every day. This limits the usefulness of such design tools to only a relatively rare group, and not the population as a whole. [0216]
  • Ideally we want to return to the world we experience everyday where we simply reach our hand to select what we want to work with, turn it to examine it more closely, move and rotate it to a proper position to attach it to another object, find the right location and orientation to apply a bend of the proper amount and orientation to allow it to fit around another design object, capture 3D real world models, or stretch and sculpture designs. [0217]
  • One of the most wonderful properties of this invention is that it gives the user the ability to control not only 3D location with the motion of his hand but he also has 4 other pieces of data (3 orientation angles and time) that can be applied to control parameters. For example if we wanted to blend 2 designs (say a Ferrari and a Corvette) to create a new design, this process could be controlled simply by [0218]
  • 1) moving the users hand from left to right to define the location of the cross section to be blended, [0219]
  • 2) tilt the hand forward to define the percentage “P” used to blend the 2 cross sections, and [0220]
  • 3) hit the letter R on the keyboard to record items 1 and 2. From each of the 2 cross sectional curves define a set of (x, y) coordinates and create a blended cross sectional coordinate set as follows (a brief sketch follows the equations): [0221]
  • X (blend)=P*X (Ferrari)+(1−P)*X (Corvette)
  • Y (blend)=P*Y (Ferrari)+(1−P)*Y (Corvette)
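  • A minimal sketch of the blend defined by these two equations, with P supplied for example by the hand tilt described in step 2. The cross-section arrays in the example are illustrative stand-ins for matched curves taken from the two designs.

    import numpy as np

    def blend_sections(ferrari_xy, corvette_xy, p):
        """ferrari_xy, corvette_xy: (N, 2) arrays of matched (x, y) cross-section points; 0 <= p <= 1."""
        ferrari_xy = np.asarray(ferrari_xy, float)
        corvette_xy = np.asarray(corvette_xy, float)
        return p * ferrari_xy + (1.0 - p) * corvette_xy   # X(blend) = P*X(Ferrari) + (1-P)*X(Corvette)

    # Example: a 70/30 blend of two toy cross sections.
    print(blend_sections([[0, 0], [1, 2], [2, 0]], [[0, 0], [1, 3], [2, 1]], p=0.7))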
  • Note that here and elsewhere, keystrokes can be replaced if desired by voice commands, assuming suitable voice recognition capability in the computer. [0222]
  • In the apparatus of FIG. 1, we desire to use a touching and indicating device 216 with action tip 217 and multi-degree of freedom enabling target 215 that the user holds in her hand. Single targets, or multiple targets, can be used with a camera system such as 206 so as to provide up to 6 axis information of pointing device position and orientation vis-a-vis the camera reference frame, and by matrix transform, relative to any other coordinate system such as that of a TV display 220. [0223]
  • In using the invention in this form, a user can send an interrupt signal from an “interrupt member” (such as pressing a keyboard key) to capture a single target location and orientation, or a stream of target locations (ended with another interrupt). A computer program in the computer determines the location and orientation of the target. The location and orientation of the “action tip” 217 of the pointing device can be computed with simple offset calculations from the location and orientation of the target or target set. [0224]
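  • A minimal sketch of such an offset calculation, assuming the camera system has already returned the target's rotation matrix and translation, and that the tip offset in the target's own frame was measured once for the tool. All numeric values are illustrative assumptions.

    import numpy as np

    def action_tip_position(R_target, t_target, tip_offset_in_target_frame):
        """R_target: 3x3 rotation of the target; t_target: its 3-vector position; offset: tip in target frame."""
        return R_target @ np.asarray(tip_offset_in_target_frame, float) + np.asarray(t_target, float)

    # Example: tip 90 mm ahead of the target along its local x axis, target rotated 30 degrees about z.
    c, s = np.cos(np.radians(30)), np.sin(np.radians(30))
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    print(action_tip_position(R, [100.0, 50.0, 400.0], [90.0, 0.0, 0.0]))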
  • The set of [0225] tip 217 locations defines the 3D shape of the real world object 205. Different targeted tools with long or curved extensions to their action tips can be used to reach around the real world object while maintaining an attached target in the target volume so the cameras can record its location/orientation.
  • By lifting the tip of the pointing device off the surface of the object, the user can send location and orientation information to operate a computer program that will deform or modify the shape of the computer model displayed. Note that the user can deform a computer model even if there is no real world object under the tip. The tip location and orientation can always be passed to the computer program that is deforming the computer model. [0226]
  • The same device can be used to replace graphic tablets, mice, or white boards, or it can be used in conjunction with a display screen, turning it into a form of touch screen (as previously, and further, discussed herein). In one mode, interrupt members can be activated (i.e. a button or keyboard key etc. can be pressed) like mouse buttons. These, together with the target ID, can initiate a computer program to act like a pen or an eraser or a specific paintbrush or spray can with width or other properties. The other target properties (z, or orientation angles) can be assigned to the computer program's pen, brush or eraser, letting the user dynamically change these properties. [0227]
  • Target(s) can be attached to a user's hand, or painted on her nails using retroreflective nail polish paint for example, allowing the user to quickly move her hand from the keyboard and allow a camera or cameras and computer like those of FIG. 1 to determine the position and orientation in 2D or 3D of a computer generated object on the display, to set the view direction or zoom, or to input a set of computer parameters or computer instructions. This can all be done with the same device that we described in the above figures. [0228]
  • A major advantage is that this is done without having to grab a mouse or other device. Finger tips can be tracked in order to determine a relative movement such as a grasping motion of the fingers, further described in FIG. 6. Similarly the relation of, say, one finger to the nail of the other hand can be seen. [0229]
  • A suitable indication can be the nail or natural image of the finger itself if suitable processing time and data processing power is available. However, as pointed out above, results today are expeditiously and economically best achieved by using easily identified, and preferably bright, indicia such as retroreflective items, brightly colored or patterned items, unusually shaped items, or a combination thereof. [0230]
  • One can also modify or virtually modify the thing digitized with the tools disclosed. The computer can both process the optical input and run the computer application software or a group of computers can process the optical data to obtain the location and orientation of the targets over time and pass that information to the application software in a separate computer. [0231]
  • The object 205 is shown being digitized with the simple pointer 216, though different tools could be used. For example, additional tools which could be used to identify the location and orientation of a 3D object are: a long stemmed pointer to work behind an object, pointers designed to reach into tight spaces or around features, and pointers that naturally slide over round surfaces or planar corners. Each time the “activation member” is triggered, the camera system can capture the location and orientation of the target as well as its ID (alternatively one could enter the ID conventionally via a keyboard, voice, or whatever). The ID is used to look up in the associated database the location of the “work tip”. The 3D coordinates can then be passed to the application software to later build the 3D data necessary to create a computer model of the object. When working on the back of the object furthest from the cameras, the object may obscure the camera view of the target on the simple tool. Thus the user may switch to the long stem tool or the curved stem tool that are used to get around the blocking geometry of the object. Other pointers can be used to reach into long crevices. [0232]
  • Let's examine the term “activation member”. This can be any signal to the computer system that it should initiate a new operation such as collect one or more data points, or store the information, or lookup information in the associated databases, etc. Examples of the activation member are a button or foot pedal electronically linked to the computer, a computer keyboard whose key is depressed, or a trigger turning on a light or set of lights on a target, or a sound or voice activation. [0233]
  • Another method of acquiring a 3D shape is to slide a targeted tool over the object, acquiring a continuous stream of 3D coordinates that can be treated as a 3D curve. These curves can later be processed to define the best 3D model to fit these curves. Each curve can be identified as either being an edge curve or a curve on the general body surface by hitting the previously defined keyboard key or other activation member. This method is extremely powerful for capturing clay modeling as the artist is performing his art. In other words, each sweep of his fingers can be followed by recording the path of a target attached to his fingers. The target ID is used to look up in the associated database the artist's finger width and the typical deformation that his fingers experience on a sweep. He can change targets as the artwork nears completion to compensate for a lighter touch with less deformation. [0234]
  • FIG. 2b [0235]
  • FIG. 2b [0236] illustrates how targeted tools can be used in a CAD system or other computer program. A targeted work tool can be a toy model of the real world tool 280 (a toy drill for example) or the tool itself 281 (a small paint brush), helping the user immediately visualize the properties of the tool in the computer program. Note that any targeted tool can be “aliased” by another tool. For instance, the tip of the brush could be redefined inside the computer program to act like the tip of a drill. The location and orientation of the drill tip, as well as the drill parameters such as its width, can be derived from the target together with its path and interrupt member information. The user can operate his CAD system as though he were operating a set of workshop or artist tools rather than traversing a set of menus.
  • The work tool and an object to be worked on can be targeted, and sensed either simultaneously or one after the other. Their relative locations and orientations can be derived allowing the user, for example, to “whittle” her computer model of the [0237] object 285 that she has in one hand with the tool 286 that is in the other hand.
  • Also a set of objects that are part of a house design process such as a door, a window, a bolt or a hinge could be defined quickly without having the user traverse a set of menus. [0238]
  • This device can perform an extremely broad range of input tasks for manipulation of 2D or 3D applications. [0239]
  • The devices that are used today for such activity are typically a mouse or a graphic tablet. Both of these devices really tend to work only in two dimensions. Everyone has had the experience with the mouse where it slips or skips over the mouse pad making it difficult to accurately position the cursor. The graphic tablet is somewhat easier to manipulate but it is bulky, covering up the desktop surface. [0240]
  • The disclosed invention can replace either of these devices. It never gets stuck since it moves in air. We can attach a target to the top of one of our hands or paint our fingernails and have them act as a target. Alternatively, for example, we can pick up a pointing device such as a pencil with a target attached to the top of it. By merely moving our hand from side to side in front of the camera system we can emulate a mouse. As we move our hand forward and backward, a software driver in our invention would emulate a mouse moving forward or backward, making input using known interface protocols straightforward. As we move our hand up and down off the table (something that neither the graphic tablet nor the mouse can do) our software driver can recognize a fully three-dimensional movement. [0241]
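  • One possible form such a software driver could take is sketched below; the scale factor is an assumed value and the send_mouse_move stub stands in for whatever operating-system mouse interface is actually used:

    # Illustrative sketch: emulate relative mouse motion from the sensed
    # (x, y) of a hand or fingernail target, one call per camera frame.
    SCALE = 4.0      # screen counts per millimetre of hand travel (assumed)
    last_xy = None

    def send_mouse_move(dx, dy):
        # Placeholder for the OS-level mouse call; here we only print it.
        print(f"mouse move dx={dx} dy={dy}")

    def on_new_frame(target_xy):
        """Feed the tracked target position from the camera system each frame."""
        global last_xy
        if last_xy is not None:
            dx = (target_xy[0] - last_xy[0]) * SCALE
            dy = (target_xy[1] - last_xy[1]) * SCALE
            send_mouse_move(int(dx), int(dy))  # hand moves right -> cursor moves right
        last_xy = target_xy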
  • Much of the difficulty with computer-aided design software comes from one's inability heretofore to move naturally around the computer object. We see a three-dimensional design projected onto the two-dimensional computer display and we attempt to move around our three-dimensional design using two-dimensional input devices such as a mouse or computer graphic tablet. Design would be so much easier if we could simply move our hand in a three-dimensional region to both rotate and locate design information. [0242]
  • One Example of a Design Session using this Invention [0243]
  • To more concretely describe this invention we will discuss one of many possible implementations: [0244]
  • painted fingernails on one's hand will act as the targets [0245]
  • the computer keyboard will indicate which commands are being performed. [0246]
  • Targets can also be attached to objects, tools, and hands. Commands can be entered by voice, buttons, other member manipulations, or even by the path of a target itself. [0247]
  • An example of a sequence of actions is now described. The specific keys picked for this example are not a restriction of this invention. In a further embodiment, other means of triggering events than keyboard strokes are disclosed. [0248]
  • Example of CAD Usage with Targeted Tools and Objects Together with Voice Recognition Activated Member [0250]
  • 1) Say “start” to begin using the invention. [0251]
  • 2) Say “rotate view” and rotate the targeted hand inside the target volume until the view on the computer display is in the direction that you choose. In the same sense that a small motion of the mouse is scaled up or down to the useful motion in the design software, a small motion or rotation of the targeted hand can be scaled. Consider the target to be composed of three separate retroreflective fingernail targets. By rotating the plane formed by the three fingernails five degrees to the left we could make the display view on the screen rotate by, say, 45 degrees. We could also use the distance between one's fingers to increase or decrease the sensitivity to the hand rotation. Thus, if one's three fingers were close together a 5-degree turn of one's hand might correspond to a 5-degree turn on the screen, while if one's fingers were widely spread apart a 5-degree turn might correspond to a 90-degree turn on the screen (a sketch of this scaling follows this list). Say “freeze view” to fix the new view. [0252]
  • 3) Move the hand inside the target volume until a 3D cursor falls on top of the display of a computer model and then say “select model”. [0253]
  • 4) Say “rotate model” and a rotation of the user's hand will cause the selected computer model to be rotated. Say “freeze model” to fix the rotation. [0254]
  • 5) Say “Select grab point” to select a location to move the selected model by. [0255]
  • 6) Say “move model” to move the selected model to a new location. Now the user can move this model in his design merely by moving his hand. When the proper location and orientation are achieved say “freeze model” to fix the object's position. This makes CAD assembly easy. [0256]
  • 7) Say “start curve” and move the targeted hand through the target volume in order to define a curve that can be used either as a design edge or as a path for objects to follow. By moving the fingers apart, the user can control various curve parameters. Say “end curve” to complete the curve definition. [0257]
  • 8) Pick up a model door that is part of a set of design objects, each of which has its own unique target and target ID. Move the targeted object in the target volume until the corresponding design object in the software system is oriented and located properly in the design. Then say “add object”. The location and orientation of the model door together with the spoken instruction will instruct the CAD program to create a door in the computer model. Moving the targeted fingers apart can vary parameters that define the door (such as height or width). [0258]
  • 9) Pick up a targeted model window and say “add object”. The location and orientation of the model window together with the spoken instruction will instruct the CAD program to create a window in the computer model. [0259]
  • 10) Say “define Parameters” to define the type of window and window properties. The 3 location parameters, 3 orientation parameters, and the path motion, can be assigned by the database associated with the object to control and vary parameters that define the window in the computer software. Say “freeze parameters” to fix the definition. [0260]
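  • The view-rotation scaling described in item 2 above might be implemented roughly as follows; the spread limits and gains are assumed values chosen only to reproduce the 5-degree-to-45-or-90-degree behavior mentioned there:

    # Illustrative sketch: scale a small hand rotation into a larger view
    # rotation, with the gain set by the spread of the fingernail targets.
    MIN_SPREAD, MAX_SPREAD = 20.0, 120.0  # mm between outer fingernails (assumed)
    MIN_GAIN, MAX_GAIN = 1.0, 18.0        # 5 deg of hand -> 5 deg .. 90 deg of view

    def view_rotation(hand_rotation_deg, finger_spread_mm):
        """Wider finger spread gives a higher rotation gain."""
        t = (finger_spread_mm - MIN_SPREAD) / (MAX_SPREAD - MIN_SPREAD)
        t = max(0.0, min(1.0, t))  # clamp to [0, 1]
        gain = MIN_GAIN + t * (MAX_GAIN - MIN_GAIN)
        return hand_rotation_deg * gain

    # Fingers close together: ~5 deg -> ~5 deg; widely spread: ~5 deg -> ~90 deg.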
  • Example: Designing a Car with Targeted Tools and Objects, Together with the Keyboard as the Member Giving Commands [0261]
  • Now we apply this to the design of an automobile. The steps are as follows: [0262]
  • 1. Pick up a model of a Corvette with a target attached to it and place it in the target volume. [0263]
  • 2. Hit the A key (or provide another suitable signal to the computer, keys being representative of one type prevalent today) to assign the target parameters that define the object's parameters of interest, such as model, year, and make. [0264]
  • 3. Pick up a targeted pointer associated with the CAD commands for locating a car part to work on. This specialized pointer's target ID, together with hitting the L key, defines a view of the car where the orientation of the target defines the view orientation and the location of the camera. If the target defines a camera position inside the car, the design information behind the camera will not be displayed. The motion of the special pointer after the hit could indicate other commands without the use of a keyboard hit. For instance, a forward or backward tilt could increase or decrease the zoom magnification of the display. A large tilt to the left could select the object under the cursor and a large tilt to the right could deselect the object under the cursor. In a CAD system this selection could mean display that part for examination, while in an inventory system it could mean deliver this part. [0265]
  • 4. Consider that the part selected for redesign in the CAD system was the hood. The user picks up a targeted curvy wire. The invention will recognize the target ID as that of a curve line cross section command, and when the user hits any key (or gives a voice command or other suitable signal) the location and orientation of the target is determined and the computer program will cause a cross section curve of the hood to be acquired at the corresponding location and orientation. The CAD system will then expect a series of keystrokes and target paths to define a new cross section leading to a modified hood design. [0266]
  • 5. Hit the M key and draw a small curve segment to modify the previously drawn curve. [0267]
  • 6. Hit the M key again to fix the modification. [0268]
  • 7. Hit the F key to file down the hood where it seems to be too high. This is accomplished by moving the targeted fingers back and forth below some specified height above a surface (for example a one-inch height above the desktop): lower the fingers and move the target or targeted hand forward or backward. This can be linked to the surface definition in the CAD system, causing the surface to be reduced as though a file or sander were being used. The lower the fingers, the more material is removed on each pass. Likewise, moving the fingers above one inch can be used to add material to the hood. Spreading the targeted fingers can increase the width of the sanding process (a sketch of this height-based removal follows this example). [0269]
  • 8. A user can acquire a 3D model (plastic, clay, etc.) by hitting the C key and either rubbing targeted fingers or a hand-held targeted sculpture tool over the model. From the path of the targeted fingers or tool we can compute the surface by applying the offset characteristics of the targeted tool. If the 3D object is made of a deformable material such as clay, the CAD system can reflect the effect of the fingers or tool passing over the model on each pass. If we want, we can add some clay on top of the model to build up material where we need it. Thus we can tie art forms such as clay modeling directly into CAD or other computer systems. [0270]
  • We can use targeted tools such as drills, knives, trowels, and scalpels to modify the clay model and thus its associated CAD model. The target ID will allow the computer to check the associated database to determine where the tip is relative to the target and define how the path of the target would result in the tool affecting the CAD model. Notice that we can use these tools in the same manner even if there is no clay model or other real world model to work on. Also notice that these tools could be simple targeted sticks, but the CAD model would still be affected in the same way. [0271]
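  • A minimal sketch of the height-dependent filing described in step 7 above, with the threshold height and removal rate as assumed values:

    # Illustrative sketch: one "filing" pass removes more material the lower
    # the targeted fingers are held below a set height above the desktop.
    THRESHOLD = 1.0   # inches; passes below this height remove material
    RATE = 0.05       # inches removed per pass per inch below the threshold

    def apply_file_pass(surface_heights, finger_height):
        """Return the surface samples after one sanding/filing pass of the hand."""
        depth = max(0.0, THRESHOLD - finger_height) * RATE
        return [h - depth for h in surface_heights]

    hood = [4.20, 4.25, 4.30, 4.25]
    hood = apply_file_pass(hood, finger_height=0.4)  # a low pass removes more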
  • FIG. 3[0272]
  • FIG. 3 illustrates additional embodiments working with virtual objects, and additional alias objects according to the invention. For example a first object can be a pencil, with the second object a piece of paper. It also illustrates how we can make use of computer image determined tool position and orientation (targeted or otherwise) to give the user tactile and visual feedback as to how the motion, location, and orientation of the tool will affect the application computer program. [0273]
  • The user of the computer application program may have several tools that she feels comfortable with on her desk. An artist for instance might have a small paintbrush, a large paintbrush, a pen, an eraser, and a pencil. Each of these would have a unique target attached to it. The artist would then pick up the tool that she would normally use and draw over the surface of a sheet of paper or over the surface of a display screen or projection of a computer display. The application software would not only trace the path of the tip of the targeted work tool, but also treat the tool as though it were a pen or paintbrush etc. The exact characteristics of the pen would be found in the associated database using the target ID as a lookup key. Extra parameters such as the width of the line, its color, or whether it is a dashed line could be determined by keyboard input or by applying the height or target orientation parameters. [0274]
  • If the artist did not own a tool that he needed, he could “alias” this tool as follows. Suppose that the artist is missing a small paintbrush. He can pick up a pen, move it into the target volume, and signal the target acquisition software, for example by typing on the computer's keyboard the letter Q followed by the ID number of the small paintbrush. From this point on the computer will use the database associated with the small paintbrush instead of that of the pen. [0275]
  • Specifically we are illustrating several concepts: [0276]
  • 1) This invention gives the user the natural tactile and visual feedback that she is used to in her art. Thus an artist would use targeted versions of the very tools, such as pens 306, paintbrushes 305, and erasers 310, that she uses without a computer. [0277]
  • 2) By drawing with a targeted tool (e.g. 336, having target 337) on a paper pad (e.g. 350 shown in FIG. 3b, with target 342) or canvas, the user again continues to experience the traditional noncomputer art form as a computer interface. (Targets in multiple corners of the paper can also be used for added resolution of paper location with respect to the tool.) The user would see her art drawn on the paper while creating a computer version with all of the editing and reproduction capabilities implied by computers. The targeted tool's motion relative to the targeted paper is what determines the line in the graphics system. Thus the user could even put the pad in her lap and change her position in a chair and properly input the graphic information as she draws on the paper, as long as the targets continue to be in the view of the camera system. [0278]
  • 3) By drawing directly on a computer display, such as shown in FIG. 12, or on a transparent cover over a computer display, the user can use the targeted tool to manipulate the computer display and immediately get feedback on how the graphics are affected. Again the art form will seem to match the traditional non-computer experience. [0279]
  • 4) Parameters such as line width, or line type, etc. can be controlled by the target parameters that are not used to determine the path of the line (usually this would be the target height and orientation). [0280]
  • 5) This invention allows the user to “alias” any object with any other object. [0281]
  • 6) This invention allows users to control computer programs by moving targeted objects around inside the target volume rather than having to learn a different menu system for each software package. Thus a child could quickly learn how to create 3D CAD designs by moving targeted toy doors 361, windows 362, drills 360, and pencils. With the use of macros found in most systems today, a user would create a hole in an object the same way on different CAD systems by moving, say, a tool such as a drill starting at the proper location and orientation and proceeding to the proper depth. [0282]
  • FIG. 4[0291]
  • FIG. 4 illustrates a car driving game embodiment of the invention, which in addition illustrates the use of target-based artifacts and simplified head tracking with viewpoint rotation. The car dash is for example a plastic model purchased or constructed to simulate a real car dash, or can even be a make-believe dash (i.e. in which the dash is made from, for example, a board, and the steering wheel from a wheel from a wagon or other toy, or even a dish), and the car is simulated in its actions via computer imagery and sounds. [0292]
  • Cameras 405 and 406, forming a stereo pair, and light sources as required (not shown) are desirably mounted on rear projection TV 409, and are used together with computer 411 to determine the location and orientation of the head of a child or other game player. The computer provides from software a view on the screen of TV 409 (and optionally sound, on speakers 413 and 414) that the player would see as he turns his head—e.g. right, left (and optionally up, down—not so important in a car game driven on a horizontal plane, but important in other games which can be played with the same equipment but different programs). This viewpoint rotation is provided using the cameras to determine the orientation of the head from one or more targets 415 attached to the player's head or, in this case, a hat 416. [0293]
  • In addition, there desirably is also a target 420 on the steering wheel which can be seen by the stereo pair of cameras 405 and 406. As the wheel is turned, the target moves in a rotary motion which can be transduced accordingly, or as a compound x and y motion, by the camera processor system means in computer 411. It is noted that the target 420 can alternately be attached to any object that we choose to act as a steering wheel 421, such as the wheel of a child's play dashboard toy 425. [0294]
  • A prefabricated plywood or plastic molded dashboard can be supplied having other controls incorporated, e.g. gas pedal 440, hinged at bottom with hinge 441 and preferably providing an elastic tactile feedback, which has target 445 viewed by cameras 405 and 406 such that the y axis position and/or z axis (range) changes as the player pushes down on the pedal. This change is sensed and determined by TV based stereo photogrammetry using the cameras and computer, which data is then converted by computer 412 into information which can be used to modify the display or audio signals, providing simulations of the car's acceleration or speed depicted with visual and auditory cues. [0295]
  • Similarly, a brake pedal or any other control action can be provided, for example moving a dashboard lever such as 450 sideways (moving, in this case, a target on its rear facing the camera, not shown for clarity, in x axis motion), or turning a dashboard knob such as 455 (rotating a target, not shown, on its rear facing the camera). [0296]
  • Alternatively to purchasing or fabricating a realistic dashboard simulation toy, the child can use his imagination with the same game software. Ordinary household objects such as salt shakers with attached targets can serve as the gas pedal, gearshift, or other controls. A dish with a target, for example, can be made by the invention to represent a steering wheel, without any other equipment used. This makes fun toys and games available at low cost once computers and camera systems become standard due to their applicability to a wide variety of applications, at ever lower hardware cost due to declining chip prices. [0297]
  • One camera system (single or stereo pair or other) can be used to follow all of the targets at once or several camera systems can follow separate targets. [0298]
  • To summarize this figure we have shown the following ideas: [0299]
  • 1) This invention can turn toys or household objects into computer controls or game controls. This is most easily accomplished by attaching one or more special targets to them, though natural features of some objects can be used. [0300]
  • 2) This invention allows us to set up control panels or instrument panels as required without the complex mechanical and electrical connections, and transducers that are typically required. This lowers the cost and complexity dramatically. [0301]
  • 3) The invention allows simplified head tracking with viewpoint rotation. [0302]
  • Some further detail on the embodiment of FIG. 4, wherein a boy is seated in front of a low cost plastic or plywood dashboard to which a targeted steering wheel and gas and brake pedals are attached (also gear shifts, and other accessories as desired). A target on the boy's hat is observed, as are the targets on the individual items of the dash, in this case by a stereo pair of cameras located atop the TV display screen, which is of large enough size to seem real—for example, at least the dashboard width is preferable. Retro-reflective tape targets of Scotchlite 7615 material are used, illuminated by light sources in close adjacency to each camera. [0303]
  • Optionally a TV image of the boy's face can also be taken to show him at the wheel, leaning out the window (likely imaginary), etc. [0304]
  • As noted previously, the boy can move his head from left to right and the computer changes the display so he sees a different view of his car on the track, and up and down, to move from a driver's view of the road to an overhead view of the course, say. [0305]
  • Stereo cameras may be advantageously located on a television receiver looking outward at the back of an instrument panel having targeted levers and switches and steering wheel, etc., whose movement and position is determined along with that of the player, if desired. The panel can be made out of low cost wood or plastic pieces. The player can wear a hat with targets viewed in the same field of view as the instrument panel—this allows all data in one view. As he moves his head to lean out the car window, so to speak, the image on screen moves its view, typically in an exaggerated manner: a small angular head movement might rotate the view 45 degrees in the horizontal or vertical direction on the screen. [0306]
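  • The exaggerated head-to-view mapping mentioned above might be sketched as follows; the gain and limit are assumed values:

    # Illustrative sketch: exaggerate a small head yaw, sensed from the hat
    # target, into a larger rotation of the displayed driving view.
    GAIN = 9.0         # e.g. 5 deg of head yaw -> 45 deg of view rotation (assumed)
    VIEW_LIMIT = 90.0  # clamp the displayed view rotation, degrees (assumed)

    def view_yaw_from_head(head_yaw_deg):
        v = head_yaw_deg * GAIN
        return max(-VIEW_LIMIT, min(VIEW_LIMIT, v))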
  • This invention allows one to change the game from cars to planes just by changing the low cost plastic or wood molded toy instrument panel with its dummy levers, switches, sliders, wheels, etc. These actuating devices are, as noted, desirably targeted for easiest results—for example by retroreflector or LED targets of high visibility and accurately determinable position. The display used can be that of the TV, or separately incorporated (and preferably removable for use in other applications), as with an LCD (liquid crystal display) on the instrument panel. Multi-person play is possible, and can be connected remotely. [0307]
  • Of significance is that all datums usable in this toy car driving simulation game, including several different driver body point inputs, head position and orientation, steering wheel position, plus driver gray level image and perhaps other functions as well, can all be observed with the same camera or multi-camera stereo camera set. This is a huge saving in cost of various equipment otherwise used with high priced arcade systems to deliver a fraction of the sensory input capability. The stereo TV images can also be displayed in stereo at another site if desired. [0308]
  • Where only a single camera is used to see a single point, depth information in z (from panel to camera, here on the TV set as shown in FIG. 4) is not generally possible. Thus steering wheel rotation is visible as an xy movement in the image field of the camera, but the gas pedal lever must be, for example, hinged so as to cause a significant x and/or y change, not just a predominantly z change. [0309]
  • A change in x and/or y can be taught to the system to represent the range of gas pedal positions, by first engaging in a teach mode where one can, as shown in FIG. 4, input a voice command to say to the system that a given position is gas pedal up, gas pedal down (max throttle), or any position in between. The corresponding image positions of the target on the gas pedal lever member are recorded in a table and looked up (or alternatively converted to an equation) when the game is in actual operation, so that the gas pedal input command can be used to cause imagery on the screen (and audio of the engine, say) to give an apparent speedup or slowing down of the vehicle. Similarly the wheel can be turned right to left, with similar results, and the brake pedal lever and any other control desired can also be so engaged. (As noted below, in some cases such control is not just limited to toys and simulations and can also be used for real vehicles.) [0310]
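  • The teach-mode table and lookup described above might be organized as follows; the taught image coordinates and throttle values are assumptions for illustration:

    # Illustrative sketch: a taught table mapping the observed image position
    # of the gas-pedal target to a throttle value, interpolated during play.
    TEACH_TABLE = [    # (image y of pedal target in pixels, throttle 0..1)
        (412.0, 0.0),  # taught by voice command: "gas pedal up"
        (371.0, 0.5),  # an intermediate taught position
        (330.0, 1.0),  # taught by voice command: "gas pedal down" (max throttle)
    ]

    def throttle_from_image_y(y):
        """Look up and linearly interpolate the taught table."""
        pts = sorted(TEACH_TABLE)  # ascending image y
        if y <= pts[0][0]:
            return pts[0][1]
        if y >= pts[-1][0]:
            return pts[-1][1]
        for (y0, t0), (y1, t1) in zip(pts, pts[1:]):
            if y0 <= y <= y1:
                return t0 + (t1 - t0) * (y - y0) / (y1 - y0)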
  • The position, velocity, and rate of change of targeted member positions can also be determined, to indicate other desirable information to the computer analyzing the tv images. [0311]
  • Where stereo image pairs are used, the largest freedom for action results as z dimension can also be encoded. However many control functions are unidirectional, and thus can be dealt with as noted above using a single camera 2D image analysis. [0312]
  • On a broader scale, this aspect of the invention allows one to create 3D physical manifestations of instruments in a simulation form, much as the National Instruments firm has pioneered two-dimensional TV-screen-only displays. In addition, such an “instrument panel” can also be used to interact with conventional programs—even word processing, spreadsheets and the like, where a lever moved by the user might shift a display window on the screen for example. A selector switch on the panel can shift to different screens altogether, and so forth. [0313]
  • FIG. 4 has also illustrated the use of the invention to create a simple general-purpose visual and tactile interface to computer programs. [0314]
  • FIG. 5[0315]
  • FIG. 5a [0316] illustrates a one-person game where a targeted airplane model 505 can be used to define the course of an airplane in a game. The orientation of the plane, determined from targets 510, 511, and 512 (on the wings and fuselage respectively) by camera(s) 530, is used by a program resident in computer 535 to determine its position and orientation, and changes therein due to movement in the game. The model can be purchased pre-targeted (where natural features such as colored circles or special retroreflectors might be used for example). The plane's position and/or orientation, or change therein, is used as an input to a visual display on the computer display and audio program to provide a realistic feeling of flight—or alternatively to allow the computer to stage a duel, wherein the opposing fighter is created in the computer and displayed either alone, or along with the fighter represented by the player. It is particularly enhanced when a large screen display is used, for example >42 inches diagonal.
  • A two person version is shown in FIG. 5b [0317] where the two computers can be linked over the internet or via a cable across the room. In the two-person game, airplane 510 is targeted with target 511 and the motion is sent over a communication link 515 to a second computer where another player has her airplane 520 with its target. The two results can be displayed on each computer display, allowing the users to interactively modify their position and orientation. An interrupt member can trigger the game to fire a weapon or reconfigure the vehicle. A set of targets 514 can even be attached (e.g. with velcro, to her hands or wrists, and body or head) to the player 513, allowing her to “become” the airplane as she moves around in front of the cameras. This is similar to a child today pretending to be an airplane, with arms outstretched. It is thus a very natural type of play, but with exciting additions of sounds and 3D graphics to correspond to the moves made.
  • For example, [0318]
  • if the child's arms tilt, to simulate a bank of the plane, a plane representation such as an F16 on the screen can also bank. [0319]
  • If the child moves quickly, the sounds of the jet engine can roar. [0320]
  • If the child moves his fingers, for example, the guns can fire. [0321]
  • And so forth. In each case a position or movement of the child is sensed by the camera, compared by the computer program to programmed or taught movement or position, and the result used to activate the desired video and/or audio response—and to transmit to a remote location, if desired, the positions and movements either raw or in processed form (i.e. a command saying “bank left” could just be transmitted, rather than the target positions corresponding thereto). [0322]
  • Also illustrated in FIG. 5[0323] c is a one or multi-person “Big Bird” or other hand puppet game embodiment of the invention played if desired over remote means such as the Internet. It is similar to the stuffed animal application described above, except that the players are not in the same room. And, in the case of the Internet, play is bandwidth limited, at least today.
  • Child 530 plays with a doll or hand puppet 550, for example Sesame Street's “Big Bird”, which can be targeted using targets 535 and 540 on its hands 551 and 552 and curvilinear line type targets 553 and 554 outlining its upper and lower lips (beak). Target motion sensed by the stereo pair of cameras 540 and 541 is transformed by computer 545 into signals to be sent over the internet 555 or through another communication link to allow a second child 556 to interact, moving his doll 560 with, say, at least one target 561. [0324]
  • In the simplest case, each user controls one character. The results of both actions can be viewed on each computer display. [0325]
  • It is noted that a simple program change can convert from an airplane fighter game to something else—for example pretending to be a model on a runway (where walking perfectly might be the goal), or dolls that could be moved in a TV screen representation of a doll house—itself selectable as the White House, Buckingham Palace, or whatever. [0326]
  • We have depicted a one or two person airplane game according to the invention, to further include inputs for triggering and scene change via movement sequences or gestures of a player. Further described are other movements such as gripping or touch indicating which can be useful as input to a computer system. [0327]
  • The invention comprehends a full suite of up to 6 degree of freedom gesture type inputs: static, dynamic, and sequences of dynamic movements. [0328]
  • FIG. 6[0329]
  • FIG. 6 illustrates other movements, such as gripping or touch indicating, which can be useful as input to a computer system. Parts of the user, such as the hands, can describe motion or position signatures and sequences of considerable utility. [0330]
  • Some natural actions of this type (learned in the course of life): grip, pinch, grasp, stretch, bend, twist, rotate, screw, point, hammer, throw. [0331]
  • Some specially learned or created actions of this type: define parameter (for example, fingers wide apart, or spaced narrow), flipped up targets etc. on fingers—rings, or simple actuated objects with levers to move targets. [0332]
  • This really is a method of signaling action to the computer using the detected position of one finger, two fingers of one hand, one finger of each hand, or two hands, or the relative motion/position of any of the above with respect to the human or the computer camera system or the screen (itself generally fixed with respect to the camera system). [0333]
  • These actions can cause objects depicted on a screen to be acted on, by sensing using the invention. For example, consider that the thumb 601 and first finger 602 of, let's say, the user's left hand 605 are near an object such as a 3D graphic rendition of a cow 610 displayed on the screen 615, in this case hung from a wall, or with an image projected from behind thereon. As the fingers are converged in a pinching motion depicted as dotted lines 620, the program of computer 630 recognizes this motion of fingernails 635 and 636, seen by cameras 640 and 641 connected to the computer which processes their images, as a pinch/grasp motion. It can either cause the image of the cow to be compressed graphically, or, if the hand is pulled away within a certain time, the motion is interpreted to be a grasp, and the cow object is moved to a new location on the screen where the user deposits it, for example at position 650 (dotted lines). Or it could be placed “in the trash”. [0334]
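  • A rough sketch of how the pinch-versus-grasp distinction described above might be made from the sensed fingernail positions; the distance and pull-away thresholds are assumed values, not taken from the disclosure:

    # Illustrative sketch: classify a pinch vs. a grasp-and-move from the 3D
    # positions of the thumb and finger targets over a short time window.
    import math

    PINCH_DISTANCE = 15.0  # mm between thumb and finger targets counted as "closed"
    GRASP_PULL = 80.0      # mm of retraction toward the user counted as a grasp

    def classify(frames):
        """frames: list of (thumb_xyz, finger_xyz) samples over the window."""
        closed = [math.dist(t, f) < PINCH_DISTANCE for t, f in frames]
        if not any(closed):
            return "none"
        start_z = frames[closed.index(True)][0][2]  # thumb z when pinch begins
        end_z = frames[-1][0][2]
        if abs(end_z - start_z) > GRASP_PULL:
            return "grasp"   # hand pulled away: pick the object up and move it
        return "pinch"       # hand stays near the screen: compress the object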
  • A microphone 655 can be used to input voice commands into the computer 630, which can then, using known technology (Dragon software, IBM ViaVoice, etc.), process the command. A typical command might be grip, move, etc., if these weren't obvious from the detected motion itself. [0335]
  • In a similar manner, speakers 660 controlled by the computer can give back data to the user, such as a beep when the object has been grasped. Where possible, for natural effect, it is desirable that sound and action coincide—that is, a squishing sound when something is squished, for example. [0336]
  • If two hands are used, one can pinch the cow image at each end, and “elongate it” in one direction, or bend it in a curve, both motions of which can be sensed by the invention in 3 dimensions—even though the image itself is actually represented on the screen in two dimensions as a rendered graphic responding to the input desired. (via action of the program). [0337]
  • The scale of the grip of the fingers depends on the range from the screen (and the object thereon being gripped), and desirably has a variable scale factor dependent on the detected range from the sensor (unless one is to always touch the screen or come very near it to make the move). [0338]
  • Pinching or Gripping is very useful in combination with voice for word processing and spreadsheets. One can move blocks of data from one place to another in a document, or from one document to the next. One can very nicely use it for graphics and other construction by gripping objects, and pasting them together, and then rotating them or whatever with the finger motions used sensed by the invention. [0339]
  • Similarly to the pinching or grasping motion just described, some other examples which can also be sensed and acted on with the invention, using either the natural image of the fingers or hands, or of specialized datums thereon, are [0340]
  • Point [0341]
  • Move [0342]
  • Slide [0343]
  • grip [0344]
  • Pull apart, stretch, elongate [0345]
  • Push together, squeeze [0346]
  • Twist, screw, turn [0347]
  • Hammer [0348]
  • Bend [0349]
  • Throw [0350]
  • FIG. 7 (block diagram) [0351]
  • FIG. 7 illustrates the use of this invention to implement an optical based computer input for specifying software program commands, parameters, define new objects or new actions in an application computer program, temporarily redefine some or all of the database associated with the target or call specific computer programs, functions, or subroutines. [0352]
  • A sequence of simple path segments of the targets obtained by this invention separated by “Quant punctuation” together with its interrupt member settings and its target ID can define a unique data set. We refer to this data set as a “Quant” referring to the discrete states (much like quantum states of the atom). The end of each path segment is denoted with a “Quant punctuation” such as radical change in path direction or target orientation or speed or the change in a specific interrupt member or even a combination of the above. The path segments are used to define a reduced or quantized set of target path information. [0353]
  • A Quant has an associated ID (identification number) which can be used as a look-up key in an associated database to find the associated program commands, parameters, objects, actions, etc. as well as the defining characteristics of the Quant. [0354]
  • An example of a Quant that could be used to define a command in a CAD or drawing system to create a rectangle might proceed as follows: [0355]
  • A. Hit the Q key on the keyboard to start recording a Quant. [0356]
  • B. Sweep the target to the right, punctuated with a short stationary pause. During the pause, analyze the vector direction for the path segment initiated with the Q key and ending with the pause. The first and last point of this segment define a vector direction that is mainly to the right with no significant up/down or in/out component. Identify this as direction 1. [0357]
  • C. Sweep the target upward, punctuated with a short stationary pause. During the pause, analyze the vector direction for the path segment initiated with the last pause and ending with the next pause. The first and last point of this segment define a vector direction that is mainly upward with no significant left/right or in/out component. Identify this as direction 2. [0358]
  • D. Sweep the target to the left, punctuated with a short stationary pause. During the pause, analyze the vector direction for the path segment initiated with the last pause and ending with the next pause. The first and last point of this segment define a vector direction that is mainly to the left with no significant up/down or in/out component. Identify this as direction 3. [0359]
  • E. Sweep the target down, punctuated with a short stationary pause. During the pause, analyze the vector direction for the path segment initiated with the last pause and ending with the next pause. The first and last point of this segment define a vector direction that is mainly down with no significant left/right or in/out component. Identify this as direction 4. [0360]
  • F. End the Quant acquisition with a key press “a” that gives additional information to identify how the Quant is to be used. [0361]
  • G. In this example the Quant might be stored as a compact set of 7 numbers and letters (4, 1, 2, 3, 4, a, 27), where 4 is the number of path segments, 1-4 are numbers that identify path segment directions (i.e. right, up, left, down), “a” is the member interrupt (the key press a), and 27 is the target ID. FIG. 7a [0362] illustrates a flow chart as to how target paths and Quants can be defined. (A sketch of this encoding follows this list.)
  • H. In another example, a continuous circular sweep rather than punctuated segments might define a circle command in a CAD system. Some Quants might immediately initiate the recording of another Quant that provides the information needed to complete the prior Quant instruction. [0363]
  • I. Specific Quants can identify a bolt and its specific size and thread parameters, together with information needed to command a computer controlled screwing device or to drill a hole for this size bolt. Another Quant could identify a hinge and: [0364]
  • J. Define a CAD model with the specific size, and manufacture characteristics defined by Quant. [0365]
  • K. Or assign joint characteristics to a CAD model. [0366]
  • L. Or command a computer controlled device to bend an object at a given location and orientation by a given location and orientation amount. [0367]
  • M. This method can be applied to sculpture, where the depth of a planar cut or the whittling of an object can be determined by the characteristics of the targeted object's path (in other words, by its Quant). [0368]
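  • A compact sketch of how a recorded path could be reduced to the Quant record described in item G of the rectangle example above; the direction quantization and data layout are illustrative assumptions:

    # Illustrative sketch: reduce punctuated path segments into a "Quant" --
    # (number of segments, segment directions..., interrupt key, target ID).
    DIRECTION_CODES = {(1, 0): 1, (0, 1): 2, (-1, 0): 3, (0, -1): 4}  # right, up, left, down

    def segment_direction(start, end):
        """Quantize a segment's dominant direction to one of the four codes."""
        dx, dy = end[0] - start[0], end[1] - start[1]
        if abs(dx) >= abs(dy):
            return DIRECTION_CODES[(1 if dx > 0 else -1, 0)]
        return DIRECTION_CODES[(0, 1 if dy > 0 else -1)]

    def make_quant(segments, interrupt_key, target_id):
        """segments: list of (first_point, last_point) pairs between pauses."""
        dirs = [segment_direction(a, b) for a, b in segments]
        return tuple([len(dirs)] + dirs + [interrupt_key, target_id])

    rectangle = [((0, 0), (10, 0)), ((10, 0), (10, 8)),
                 ((10, 8), (0, 8)), ((0, 8), (0, 0))]
    print(make_quant(rectangle, 'a', 27))  # -> (4, 1, 2, 3, 4, 'a', 27)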
  • FIG. 8[0369]
  • FIG. 8 illustrates the use of this invention for medical applications. A user can apply this invention for teaching medical and dental students, or controlling robotic equipment used for example in medical and dental applications. In addition, it can be used to give physically controlled lookup of databases and help systems. [0370]
  • In FIG. 8a [0371], somewhat similar to FIG. 1 above, a scalpel has two targets 801 and 802 (in this case triangular targets), allowing a 6 degree of freedom solution of the position and orientation of the scalpel 811 to which they are attached, having a tip 815. Other surgical instruments can also be used, each with their own unique targets and target IDs, if desired, to allow their automatic recognition by the electro-optical sensing system of the invention.
  • The figure shows a medical student's hand 820 holding a model of a surgical instrument, a scalpel. [0372] A model of a body can be used to call up surgical database information, in the computer attached to the camera system, about the body parts in the vicinity of the point on the body model 825 being touched. If the targeted tool is pressed down, compressing the spring 810 and moving the targets 801 and 802 apart, the information displayed can refer to internal body parts. The harder the user presses down on the spring, the farther apart the targets move and the lower in the body the information that the database is instructed to display. If the user wants to look up information on drugs that are useful for organs in a given region of the body, he might use a similar model syringe with a different target having a different ID. In a similar way a medical (or dental) student could be tested on his knowledge of medicine by using the same method to identify and record in the computer the location on the body that is the answer to a test question. Similarly the location and orientation of the targeted tool can be used to control the path of a robotic surgery tool.
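  • One way the spring-compression-to-depth lookup described above might be organized is sketched below; the separations and anatomical layer names are assumed values for illustration:

    # Illustrative sketch: use the measured separation of the two scalpel
    # targets (which grows as the spring is compressed) to pick how deep in
    # the body the displayed database information should come from.
    LAYERS = [   # (minimum target separation in mm, layer to display)
        (30.0, "skin"),
        (36.0, "muscle"),
        (42.0, "organ surface"),
        (48.0, "deep structures"),
    ]

    def layer_for_separation(separation_mm):
        """Return the deepest layer whose threshold the separation has reached."""
        selected = LAYERS[0][1]
        for threshold, name in LAYERS:
            if separation_mm >= threshold:
                selected = name
        return selected

    print(layer_for_separation(43.5))  # -> "organ surface"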
  • Notice that the tool with a spring gives the user tactile feedback. Another way the user can get tactile feedback is to use this pointer tool on a pre-calibrated material that has the same degree of compression or cutting characteristics as the real body part. [0373]
  • In a preferred embodiment, each surgical device has its own unique target and its own unique target ID. One of the unique features of this invention is that the user can use the actual surgical tool that he uses normally in the application of his art. Thus, a dental student can pick up a standard dental drill, and the target can be attached to a dental drill that has the same feel as an ordinary drill. [0374]
  • FIG. 8b [0375] shows how several objects can be attached to specialized holders that are then attached to a baseboard to create a single rigid collection whose location and orientation can be preregistered and stored in a computer database such that only a single targeted pointer or tool need be tracked. The baseboard has one or more specialized target attachment locations. We consider two types of baseboard/holder attachments, fixed (such as pegboard/hole) or freeform (using for example magnets or velcro). Charts 8d and 8e describe how these might be calibrated.
  • Attachable targets can be used to pre-register the location and orientation of one or more objects relative to a camera system and to each other using a baseboard 839, shown here with square pegs 837 and an attachment fixture 838 that will hold a specialized target such as those shown as 855, 856, and 857. [0376] A set of objects, here shown as a model of a body 840 and a model of a heart 841 with attachment points 842 and 843, are attached to object holders 845 and 846 at attachment points 847 and 848. The object holders can be of different shapes, allowing the user to hold the object at different orientations and positions as desired. Each object holder has an attachment fixture 850 and 851 that will hold a specialized target. The user then picks the appropriate target together with the appropriate fixture on the object holder so that the target is best positioned in front of the camera to capture the location and orientation of the target. Charts 8d and 8e describe the calibration process for a fixed and freeform attachment implementation respectively. Once the baseboard and targets have been calibrated, a computer program can identify which object is being operated on and determine how this information will be used. The steps for utilizing this system are described in Chart 8f.
  • FIG. 8c [0377] illustrates that a dentist with a targeted drill and a target attached to a patient's teeth can have the computer linked to the camera system perform an emergency pull-back of the drill if the patient sneezes.
  • Many other medically related uses may be made of the invention. For example, movement or position of a person may be sensed, and used to activate music or 3D stimulus. This has suspected therapeutic value when combined with music therapy in the treatment of stroke victims and psychiatric disorders. [0378]
  • Similarly, the output of the sensed condition such as hand or feet position, can be used to control actuators linked to therapeutic computer programs, or simply for use in health club exercise machines. Aids to the disabled are also possible. [0379]
  • FIG. 9[0380]
  • FIG. 9 illustrates a means for aiding the movement of a person's hands while using the invention in multiple degree of freedom movement. [0381]
  • A joystick is often used for game control. Shown in FIG. 9a [0382] is a joystick 905 of the invention having an end including a ball 910, in which data from datums on the ball at the end of the stick is taken optically by the video camera 915 in up to 6 axes, using a square retroreflective target 920 on the ball. The stick of this embodiment, unlike other joysticks, is itself provided not as a transduction device, but to support the user. Alternatively some axes can be transduced, e.g. with LVDTs or resolvers, while data in other axes is optically sensed using the invention.
  • When one wishes to assemble objects, one object may be held in each hand, or one can use two joysticks as above, or one stick aid as shown here with one hand free, for example. [0383]
  • FIG. 9b [0384] shows an alternative to a joystick, using retroreflective material targets attached to fingers 930, 931 and 932 resting on a floating pad 935, which rests on a liquid 940 in a container 945. The floating pad gives comfortable support to the hand while freely allowing the targeted hand to move and rotate. We believe that this invention will help reduce the incidence of carpal tunnel syndrome.
  • FIG. 9c [0385] shows another, more natural way to use this invention in a manner that would eliminate carpal tunnel syndrome: one merely lets the targeted hand 960 hang down in front of a camera system 970, as also illustrated in the context of an armrest in FIG. 10.
  • FIG. 10[0386]
  • FIG. 10 illustrates a natural manner of computer interaction for aiding the movement of a person's hands while using the invention in multiple degree of freedom movement, with one's arms resting on an armrest of a chair, car, or the like. [0387]
  • As shown, user 1005 sitting in chair 1010 has his thumb and two fingers on both hands 1011 and 1012 targeted with ring shaped retroreflector bands 1015-1020 as shown. [0388] All of the datums are seen with stereo TV camera pair 1030 and 1031 on top of display 1035 driven by computer 1040, which also processes the TV camera images. Alternatively, one hand can hold an object, and the user can switch objects as desired, in one or both of his hands, to suit the use desired, as has been pointed out elsewhere in this application.
  • We have found that this position is useful for ease of working with computers, in particular when combined with microphone 1050 to provide voice inputs as well, which can be used for word processing and general command augmentation. [0389]
  • This type of seated position is highly useful for inputs to computers associated with [0390]
  • CAD stations [0391]
  • Cars [0392]
  • Games [0393]
  • Business Applications [0394]
  • To name a few. It is noted that the armrest itself may contain other transducers to further be used in conjunction with the invention, such as force sensors and the like. [0395]
  • FIG. 11[0396]
  • This figure illustrates an embodiment wherein other variable functions in addition to image data of scene or targets are utilized. As disclosed, such added variables can be via separate transducers interfaced to the computer or desirably provided by the invention in a manner to coexist with the existing TV camera pickups used for position and orientation input. [0397]
  • A particular illustration of a level vial in a camera field of view illustrates as well the establishment of a coordinate system reference for the overall 3-6 degree of freedom coordinate system of the camera(s). As shown, level vial 1101 located on the object 1102 is imaged by single camera 1140 along with the object, in this case having a set of 3 retro-reflective targets 1105-1107, and a retro-reflector 1120 behind the level vial to aid in returning light from near co-axial light source 1130 (and particularly from the meniscus 1125) to camera 1140. The camera is used both for single camera photogrammetry to determine object position and orientation, and as well to determine the level, in one or two planes, of the object with respect to earth. [0398]
  • It is noted that the level measuring device such as a vial, inclinometer, or other device can also be attached to the camera and with suitable close-up optics incorporated therewith to allow it to be viewed in addition to the scene. In this case the camera pointing direction is known with respect to earth or whatever is used to zero the level information which can be very desirable. [0399]
  • Clearly other variables such as identification, pressure, load, temperature, etc. can also be so acquired by the cameras of the invention along with the image data relating to the scene or position of objects. For example the camera can see a target on a bimorph responsive to temperature, or it could see the natural image of mercury in a manometer. [0400]
  • FIG. 12[0401]
  • This figure illustrates a touch screen constructed according to the invention employing target inputs from fingers or other objects in contact with the screen, either of the conventional CRT variety, or an LCD screen, or a projection screen—or virtual contact of an aerial projection in space. [0402]
  • As shown, a user 1201 with targeted finger 1203 has its position in 3D space relative to TV screen 1205 (or alternatively its absolute position in room space) observed by camera system 1210, comprising a stereo pair of cameras (and, if required, light sources) as shown above. When the user places the target 1202 on his finger 1203 in the field of view of the cameras, the finger target is sensed, and as the range detected by the system decreases, indicating a touch is likely, the sensor system begins reading continuously (alternatively, it could read all the time, but this uses more computer time when not in use). When the sensed finger point reaches a position, such as “P” on the screen, or in a plane or other surface spaced ahead a distance Z from the screen defined as the trigger plane, the system reads the xy location, in the xy plane of the screen, for example. [0403]
  • Alternatively a transformation can be done to create artificial planes, curved surfaces or the like used for such triggering as well. [0404]
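  • A minimal sketch of such trigger plane logic is shown below, assuming the screen plane (origin, axes, and normal) has already been calibrated in camera coordinates; the function names and trigger distance are illustrative only.

```python
import numpy as np

def check_touch(finger_xyz, screen_origin, screen_x_axis, screen_y_axis,
                screen_normal, trigger_z=0.01):
    """Return (touched, x, y): whether the fingertip target has crossed the
    trigger plane a distance Z ahead of the screen, and its xy location
    expressed in the screen plane (all inputs are 3-element numpy arrays)."""
    d = np.dot(finger_xyz - screen_origin, screen_normal)  # signed distance
    p = finger_xyz - d * screen_normal                      # foot of perpendicular
    x = np.dot(p - screen_origin, screen_x_axis)            # screen-plane coords
    y = np.dot(p - screen_origin, screen_y_axis)
    return d <= trigger_z, x, y
```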
  • Target datums on the screen, either retro-reflectors or LEDs, say at the extremities, or datums projected onto the screen by electron guns or other light projection devices of the TV system, can be used to indicate to, or calibrate, the stereo camera system of the invention with respect to the datum points of interest on the screen. [0405]
  • For example, calibration datums 1221-1224 are shown projected on the screen, either in a calibration mode or continuously, for use by the stereo camera system, which can for example search for their particular color and/or shape. These could be projected for a very short time (e.g. one 60 Hz TV field), and synched to the camera, such that the update in calibration of the camera to the screen might seem invisible to the user. [0406]
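  • One way such projected calibration datums could be used, sketched below under the assumption that the screen is planar, is to fit for each camera a homography mapping the image pixels of datums 1221-1224 to their known screen coordinates; the implementation details are illustrative and not taken from the disclosure.

```python
import numpy as np

def homography_from_corners(img_pts, scr_pts):
    """Solve the 3x3 homography H with scr ~ H @ img from four corner
    correspondences (direct linear transform, unnormalised for brevity)."""
    A = []
    for (u, v), (x, y) in zip(img_pts, scr_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)          # smallest singular vector

def pixel_to_screen(H, u, v):
    """Map a camera pixel (u, v) to screen coordinates using H."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w
```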
  • A specially targeted or natural finger can be used with the invention, or an object either natural (e.g. a pencil point) or targeted (a pencil with a retroreflector near its tip, for example) can be used. In general, the natural case is not as able to specifically define a point, however, due to machine vision problems in defining its position using the limited numbers of pixels often available in low cost cameras. The retro-reflector or LED target case is also much faster, due to the light power available to the camera system, and the simplicity of solution of its centroid, for example. [0407]
  • This is an important embodiment, as it allows one to draw, finger paint, or otherwise write on screens of any type, including large screen projection TVs, especially rear projection, where the drawing doesn't obscure the video projection. [0408]
  • Even when front projection onto a screen is used, one can still draw, using for example video blanking to project the screen image only where not obscured, if desired. The cameras for viewing the targeted finger or paintbrush, or whatever is used to make the indication, can incidentally be located even behind the screen, viewing the target through the screen (this assumes the screen is sufficiently transparent and non-distorting to allow this to occur). [0409]
  • It is noted that the screen may itself provide tactile feel. For example, one can remove material from a screen on which imagery is projected. This could for example be a clay screen, with a front projection source. The object removing the material could be a targeted finger or other object such as a sculpture tool. As discussed previously, the actual removal of material could be only simulated, given a deformable screen feel, or with no feel at all, if the screen were rigid. [0410]
  • It is also of interest that the object on which the projection is displayed need not be flat like a screen, but could be curved to better represent or conform to the object shape represented, or for other purposes. [0411]
  • The embodiment of the invention of FIG. 12 can be further used for computer aided design, particularly with large screens which can give life size images, and for use with life size tools and finger motion. The use of the inputs herein described, as with respect to the figure above, is expected to revolutionize computer aided design and related fields in the sense of making computer use far more intuitive and usable effectively by the populace as a whole. [0412]
  • It is extremely interesting to consider a CAD display in life size or at least large size form. In this case, the user experience is much improved over that of today, and the desired result is reached more quickly due to the much more realistic experience. Illustrating this are applications to car and clothing design. [0413]
  • For example, consider the view from the bottom of an underbody of a car with all its equipment such as cables, pipes and other components on a life size projection TV image 1260, obtainable today at high definition with digital video projectors, especially if one only worked with half the length of the car at once. Using the invention, a designer 1200 can walk up to the screen image (2 dimensionally displayed, or if desired in stereoscopic 3D), and trace, with his finger 1203, the path where the complex contoured exhaust pipe should go, a notorious design problem. [0414]
  • The computer 1240, taking the data from the stereo pair of TV cameras 1210, can cause the TV screen to display the car undercarriage life size, or if desired to some other scale. The designer can look for interferences and other problems as if it were real, and can even take a real physical part if desired, such as a pipe or a muffler, and lay it life size against the screen where it might go, and move the other components around “physically” with his hand, using his hand or finger tracked by the TV camera or cameras of the system as input to the corresponding modification of the computer generated image projected. [0415]
  • Multiple screens having different images can be displayed as well by the projector, with the other screens for example showing section cuts of different sections of the vehicle which can further indicate to the designer the situation, viewed from different directions, or at different magnifications, for example. With the same finger, or his other hand the designer can literally “cut” the section himself, with the computer following suit with the projected drawing image, changing the view accordingly. [0416]
  • The invention has the ability to focus one's thoughts into a set of motions: fast, intuitive and able to quickly and physically relate to the object at hand. It is felt by the inventors that this will materially increase productivity of computer use, and dramatically increase the ability of the computer to be used by the very young and old. [0417]
  • As noted above in the car design example, individual engineers, using targeted hands and fingers (or natural features such as finger tips) or targeted aids or tools as described, can literally move the exhaust pipe by grabbing it on the screen using the invention and bending it, i.e. causing a suitable computer software program in real time to modify the exhaust pipe database to the new positions and display same on the projected display (likely wall size). [0418]
  • If no database existed, a drawing tool can be grabbed, and the engineer can “draw” on the screen, using his finger or tool, targeted and sensed by the TV camera or other sensor of the invention, where he wants the exhaust pipe to go. The computer then creates a logical routing and the necessary dimensions of the pipe, using manufacturing data as needed to ensure it could be made reliably and economically (if not, an indication could be provided to the engineer, with hints as to what is needed). [0419]
  • One of the real beauties of this is that it is near real, and it is something that a group of more than one person can interact with. This gives a whole new meaning to design functions that have historically been solo in front of a “tube”. [0420]
  • For best function the screen should be a high definition TV (HDTV), such that a user looking at one side sees good detail and can walk over to another side and also see good detail. [0421]
  • Following in FIG. 13, another useful big screen design application in full size is to design a dress on a model. The use of the big screen allows multiple people to interact easily with the task, and allows a person to grip a portion of the prototype dress on the screen and move it elsewhere (in this case finger tips as targets would be useful). It also allows normal dress tools to be used, such as a targeted knife or scissors. [0422]
  • FIG. 13[0423]
  • Illustrated is clothing design using finger touch and targeted material. The invention is useful in this application both as a multi-degree of freedom input aid to CAD as disclosed elsewhere herein, and for the very real requirement to establish the parameters of a particular subject (a customer, or representative “average” customer, typically) or to finalize a particular style prototype. [0424]
  • A particular example is herein shown with respect to design of women's dresses, lingerie and the like, where the fit around the breasts is particularly difficult to achieve. As shown, the invention can be employed in several ways. [0425]
  • First, the object, in this case a human or manikin, with or without clothes, can be digitized, for the purpose of planning initial cutting or sewing of the material. This can be accomplished with the invention using a simple laser pointer. It is believed that some similar ideas have been developed elsewhere, using projection grids, light stripes or the like. However, the digitization of the object can be accomplished at very low cost as described below using the multicamera stereo vision embodiment of the invention. [0426]
  • Secondly, the cloth itself can be targeted, and the multicamera stereo system can acquire the target data before tryout and/or determine the distorted data (such as position, location or shape) after tryout, with modifications then made using this data to assist in modifying the instant material or subsequent material desired. [0427]
  • Third, one can use to advantage the ability of the invention to contour and designate action on objects in real time. For example, consider fashion model 1301 wearing dress 1302 that, let us say, doesn't fit quite right in the breast area 1303. To help fix this problem, she (or someone else, alternatively) can, using her targeted finger 1310, rub her finger on the material where she wishes, to instruct the computer 1315, connected to stereo camera 1316 (including light sources as required), either of her own shape (which could also have been done without clothes on) relative to the shape of the material on her, or of the shape, or lack of shape, she thinks it should be (the lack of shape illustrated for example being solved by eliminating a fold, crease, or bunching up of the dress material). Data from multiple sequential points can be taken as she rubs her finger over herself, obtaining her finger coordinates via the invention and digitizing the shape in the area in question along the path traveled. [0428]
  • Such instruction to the computer can for example be by voice recording (for later analysis, for example) or even instant automatic voice recognition. In addition, or alternatively, it can be via some movement such as a hand movement indication she makes, which can carry pre-stored and user programmable or teachable meaning to the computer (described also in FIG. 7 above and elsewhere herein). For example, moving her finger 1310 up and down in the air may be sensed by the camera and discerned as a signal to let out material vertically. A horizontal wave would be to do it horizontally. Alternatively she might hold an object with a target in her other hand, and use it to provide a meaning. As further disclosed in FIG. 6, she can make other movements which can be of use as well. By pinching her fingers, which could be targeted for ease of viewing and recognition, she could indicate taking up material (note she can even pinch the material of a prototype dress just as she would in real life). [0429]
  • It is noted that the model could alternatively point a laser pointer such as 1320, with spot 1321, at the point on herself needed, the 3D coordinates of the laser-designated point being determined by the stereo cameras imaging the laser spot. This too can be done with a scanning motion of the laser to obtain multiple points. Zones other than round spots can be projected as well, such as lines formed with a cylinder lens. This allows a sequence of data points to be obtained from a highly curved area without moving the laser, which can cause motion error. Alternatively, she could use a targeted object, such as scissors or a ruler, to touch herself with, not just her finger, but this is not as physically intuitive as one's own touch. [0430]
  • A microphone 1340 may be used to pick up the model's voice instruction for the computer. Since the instruction can be made by the actual model trying on the clothes, others need not be present. This saves labor to effect the design or modification input, and perhaps in some cases is less embarrassing. Such devices might then be used in clothing store dressing rooms, to instruct minor modifications to otherwise ready-to-wear clothes desired for purchase. [0431]
  • In many applications, a laser pointer can have other uses as well in conjunction with the invention. In another clothes related example, a designer can point at a portion of a model, or clothes on the model and the system can determine where the point falls in space, or relative to other points on the model or clothes on the model (within the ability of the model to hold still). Additionally, or alternatively, the pointer can also be used to indicate to the computer system what area is in need of work, say by voice, or by the simple act of pointing, with the camera system picking up the pointing indication. [0432]
  • It is also noted that the pointer can project a small grid pattern (crossed lines, dot grid, etc.) or a line or a grille (parallel lines) on the object to allow multiple points in a local area of the object to be digitized by the camera system. Such local data, say in a portion of the breast area, is often all that is needed for the designer. This is illustrated by pointer projector 1350 projecting a dot grid pattern of 5×5 or 25 equally spaced spots 1355 (before distortion in the camera image caused by curvature of the object) on a portion of bra 1360; picking up the spot images with the stereo cameras over not-too-curved areas is not too difficult. If the points cannot be machine matched in the two stereo camera images by the computer program, such matching can be done manually from a TV image of the zone. Note that different views can also be taken, for example with the model turning slightly, which can aid matching of the points observed. Alternatively, added cameras from different directions can be used to acquire points. [0433]
  • Note too the unique ability of the system to record in the computer, or on a magnetic or other storage medium for example, a normal grayscale photographic image as well as the triangulated spot image. This is of considerable use, both in storing images of the fashion design (or lack thereof), as well as in matching of stereo pairs and understanding of the fitting problem. [0434]
  • FIG. 14[0435]
  • FIG. 14 illustrates additional applications of alias objects such as those of FIG. 3, for purposes of planning, visualization, building toys, and inputs in general. As shown, a user, in this case a child 1401, desires to build a building with his blocks, such as 1410-1412 (only a few of his set illustrated for clarity). He begins to place his blocks in front of a camera or cameras of the invention, such as cameras 1420 and 1421, which obtain a stereo pair of images of points on his blocks which may be easily identified, such as corners or dot markings such as those shown (which might be on all sides of the blocks), and which desirably are retro-reflective or otherwise of high contrast. Rectangular colored targets on rectangular blocks are a pleasing combination. [0436]
  • As he sequentially places his blocks to build his building, images of a building can be made to appear via software running in [0437] computer 1440, based on inputs from cameras 1420 and 1421 shown here located on either side of TV screen 1430. These images such as 1450, can be in any state of construction, and can be any building, e.g. the Empire State building, or a computer generated model of a building. Or by changing software concerning the relevant images to be called up or generated, he could be building a ship, a rocket, or whatever.
  • Similarly, such an arrangement of a plurality of objects can be used for other purposes, such as for physical planning models in 3D, as opposed to today's computer generated PERT charts, Gantt charts, and organization charts in 2D. Each physical object, such as the blocks above, can be coded with its function, which itself can be programmable or selectable by the user. For example, some blocks can be bigger or of different shape or other characteristic in the computer representation, even if in actuality they are the same or only slightly different, for ease of use or cost reasons, say. The target on the block can optically indicate to the computer what kind of block it is. [0438]
  • Another application would be plant layout, where each individual block object could be a different machine, and could even be changed in software as to which machine was which. In addition, some blocks could for example represent machine tools in the computer, others robots, and so on. [0439]
  • FIG. 15[0440]
  • FIG. 15 illustrates a sword play video game of the invention using one or more life-size projection screens. While large screens aren't needed to use the invention, the physical nature of the invention's input ability lends itself to same. [0441]
  • As shown, player 1501 holds sword 1502 having 3 targets 1503-1505, whose position in space is imaged by stereo camera photogrammetry system (single or dual camera) 1510 and retroreflective IR illumination source 1511, so that the position and orientation of the sword can be computed by computer 1520 as discussed above. The display, produced by overhead projector 1525 connected to computer 1520, is a life size or near life size HDTV projection TV image 1500 directly in front of the player 1501, immersing him in the game more so than in conventional video games, as the image size is what one would expect in real life. [0442]
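  • A minimal sketch of computing the sword's position and orientation from the triangulated 3D locations of targets 1503-1505, given their known layout on the sword, follows below, using a standard least-squares rigid alignment (Kabsch); this is one possible solution method rather than the specific one contemplated in the disclosure.

```python
import numpy as np

def pose_from_targets(model_pts, measured_pts):
    """Rigid-body pose (R, t) aligning the sword's known target layout
    (model_pts, in sword coordinates) to the triangulated target positions
    (measured_pts, in camera coordinates): measured = R @ model + t."""
    model_pts = np.asarray(model_pts, float)
    measured_pts = np.asarray(measured_pts, float)
    cm, cd = model_pts.mean(axis=0), measured_pts.mean(axis=0)
    H = (model_pts - cm).T @ (measured_pts - cd)   # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                       # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cm
    return R, t
```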
  • Let us now consider further how this invention can be used for gaming. In many games it is desired to change the view of the player with respect to the room or other location, to look for aliens or what have you. This is typical of “kick and punch” type games, but many other games are possible as well. Regardless, the viewpoint is easily adapted here by turning the head, with targeting of the head as has been shown and described above, and in copending applications by Tim Pryor. [0443]
  • This however begs an interesting question as to whether in turning the head, one is actually looking away from the game, if the game is on a small screen. This explains why a larger screen is perhaps desirable. But if one sits in front of a large screen, say 40″ diagonal or more, one may feel that a little joystick or mouse is much too small as the means to engage computer representations of the opponents. However, using this invention one can simply have a targeted finger or an object in one's hand that could be pointed for example. It is far more natural, especially with larger screens—which themselves give more lifelike representations. [0444]
  • The whole game indeed may actually be on a human scale. With very large projection TV displays, the enemies or other interacting forces depicted on the screen can in fact be human size and can move around, by virtue of the computer program control of the projection screen, just the same as they would have in life. This makes it important, and is indeed part of the fun of using the invention, to employ human size weapons that one might use, including but not limited to one's own personally owned weapons, targeted according to the invention if desired for ease of determining their location. The opponent's actions can be modeled in the computer to respond to those of the player detected with the invention. [0445]
  • A two or more player game can also be created where each player is represented by a computer modeled image on the screen, and the two screen representations fight or otherwise interact based on data generated concerning each player's positions or the positions of objects controlled or maneuvered by the players. The same stereo camera system can, if desired, be used to see both players if they are in the same room. [0446]
  • For example in the same, or alternatively in another game, the player 1549 may use a toy pistol 1550 which is also viewed by stereo camera system 1510 in a similar manner, to effect a “shootout at the OK corral” game of the invention. In this case the player's hand 1575 or holster 1520 and pistol 1585 may be targeted with one or more targets as described in other embodiments, and viewed by the stereo camera (single or dual) system of the invention, as in the sword game above. On the screen in front of the player is a video display of the OK corral (and/or other imagery related to the game) with “bad guys” such as represented by computer graphics generated image 1535, who may be caused by the computer game software to come into view or leave the scene, or whatever. [0447]
  • To play the game in one embodiment, the player draws his gun when a bad guy draws his and shoots. His pointing (i.e. shooting) accuracy and timing may be monitored by the target-based system of the invention, which can determine the time at which his gun was aimed, and where it was aimed (desirably using at least one or more targets or other features of his gun to determine pointing direction). This is compared in the computer 1520 with the time taken by the bad guy drawing, to determine who was the winner, if desired both in terms of time and accuracy of aiming of the player. [0448]
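  • A hypothetical scoring routine for such a shootout comparison is sketched below; the thresholds, units, and data layout are assumptions for illustration only.

```python
import math

def score_draw(player_aim_time_s, badguy_draw_time_s,
               aim_point_xy, badguy_center_xy, hit_radius=0.15):
    """Compare the moment the pistol's pointing ray settled on the target
    with the computer opponent's draw time, and check aiming accuracy.
    aim_point_xy / badguy_center_xy are screen-plane coordinates in metres."""
    dx = aim_point_xy[0] - badguy_center_xy[0]
    dy = aim_point_xy[1] - badguy_center_xy[1]
    miss = math.hypot(dx, dy)                 # distance from the bad guy
    hit = miss <= hit_radius
    faster = player_aim_time_s < badguy_draw_time_s
    if hit and faster:
        return "player wins"
    if hit:
        return "hit, but too slow"
    return "missed"
```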
  • An added feature is the ability of a TV camera of the invention to take (using one of the cameras used for datum detection, or a separate camera such as 1580) a normal 2D color photograph or TV image 1588 of a player or other person 1586, and via computer software superpose it on, or otherwise use it to create via computer techniques, the image of one of the bad (or good) guys in the game! This adds a personal touch to the action. [0449]
  • Thanks to the transmission properties of fiber cable, ISDN, the Internet or whatever, gaming data can be transmitted so that game opponents, objects and such can be in diverse physical places. On their screen they can see you, and on your screen you would see them, with the computer then, upon any sort of a hit, changing their likeness to be injured or whatever. [0450]
  • FIG. 15B illustrates on pistol 1585 a target indicator flag 1584 which is activated to signal the TV camera or cameras 1510 observing the pistol orientation and position. When the trigger is pulled, the flag with the target pops up, indicating this event. Alternatively, an LED can be energized to light (run by a battery in the toy) instead of the flag raising. Alternatively, a noise such as a “pop” can be made by the gun, which noise is picked up by a microphone 1521 whose signal is processed using taught sounds and/or signature processing methods known in the art to recognize the sound, and used to signal the computer 1520 to cause the projected TV image 1500 to depict the desired action imagery. [0451]
  • In one embodiment of the shooting game just described, a bad guy or enemy depicted on the screen can shoot back at the player, and if so, the player needs to duck the bullet. If the player doesn't duck (as sensed by the TV camera computer input device of the invention), then he is considered hit. The ducking reflex of the player to the gun being visibly and audibly fired on the screen is monitored by the camera, which can look at datums on, or the natural features of, the player; in the latter case, for example, the center of mass of the head or the whole upper torso moving from side to side or downward to duck the bullet. Alternatively, the computer TV camera combination can simply look at the position, or changes in the position, of the target datums on the player. The center of mass in one embodiment can be determined by simply determining the centroid of pixels representing the head in the gray level TV image of the player. [0452]
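  • A minimal sketch of such a centroid-based ducking check is given below, assuming a thresholded gray level frame in which the head (or a bright hat target) dominates the bright pixels; the threshold values are illustrative.

```python
import numpy as np

def head_centroid(gray, threshold=200):
    """Centroid of bright pixels (e.g. a retroreflective hat target or a
    thresholded head region) in a grey-level frame; returns (x, y) or None."""
    ys, xs = np.nonzero(gray >= threshold)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

def ducked(prev_xy, cur_xy, min_shift_px=40):
    """A large sideways or downward jump of the centroid between frames is
    taken as the player ducking the on-screen shot."""
    dx = abs(cur_xy[0] - prev_xy[0])
    dy = cur_xy[1] - prev_xy[1]          # image y grows downward
    return dx > min_shift_px or dy > min_shift_px
```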
  • It is noted that both the sword and the pistol are typically pointed at the screen, and since both objects are extensive in the direction of pointing, the logical camera location is preferably to the side or overhead, rather than on top or at the side of the screen, say. In addition, line targets aligned with the object axis, such as 1586 on pistol 1585, are useful for accurately determining with a stereo camera pair the pointing direction of the object. [0453]
  • Where required, features or other data of the sword and pistol described, or the user, or other objects used in the game, may be viewed with different cameras 1590 and 1591 (also processed by computer 1520) in order that at any instant in the game, sufficient data on the sword (or pistol, or whatever) position and/or orientation can be determined regardless of any obscuration of the targets or other effects which would render targets invisible in a particular camera view. Preferably, the computer program controlling the sensors of the game or other activity chooses the best views, using the targets available. [0454]
  • In the case illustrated, it is assumed that target location with respect to the data base of the sword is known, such that a single camera photogrammetry solution as illustrated in FIG. 1b can be used if desired. Each camera acquires at least 3 point targets (or other targets, such as triangles, allowing a 3D solution) in its field, and solves for the position and orientation using those three, combined with the object data base. In one control scheme, camera 1590 is chosen as the master, and only if it can't get an answer is camera 1591 data utilized. If neither can see at least 3 targets, then data from each camera as to target locations is combined to jointly determine the solution (e.g. 2 targets from each camera). [0455]
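  • The camera selection scheme just described might be coded along the following lines; the solver callables and camera labels are placeholders rather than anything specified in the disclosure.

```python
def solve_pose(visible_targets_by_camera, solve_single, solve_combined):
    """Master/fallback control scheme: camera "A" is the master; camera "B"
    is used only if "A" cannot see 3 targets; otherwise partial views from
    both cameras are combined.  The solver callables are assumed to exist
    and to return a pose (or None)."""
    a = visible_targets_by_camera["A"]
    b = visible_targets_by_camera["B"]
    if len(a) >= 3:
        return solve_single("A", a)
    if len(b) >= 3:
        return solve_single("B", b)
    if len(a) + len(b) >= 3:             # e.g. 2 targets from each camera
        return solve_combined(a, b)
    return None                          # not enough data this frame
```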
  • The primary mode of operation of the system could alternatively be to combine data from two cameras at all times. Often the location of choice is to the side or overhead, since most games are played more or less facing the screen with objects that extend in the direction of the screen (and often as a result are pointed at the screen). For many sports, however, a camera location looking outward from the screen is desired, due to the fact that datums may be on the person or an object. In some cases cameras may be required in all 3 locations to assure an adequate feed of position or orientation data to computer 1520. [0456]
  • The invention benefits from having more than 3 targets on an object in a field, to provide a degree of redundancy. In this case, the targets should desirably be individually identifiable either due to their color, shape or other characteristic, or because of their location with respect to easily identifiable features of the sword object. Alternatively, one can use single targets of known shape and size such as triangles which allow one to use all the pixel points along an edge to calculate the line—thus providing redundancy if some of the line is obscured. [0457]
  • Note that one can use the simple tracking capability of the invention to obtain the coordinates of a target on a user in a room with respect to the audio system and, if desired, also with respect to other room objects influencing sound reverberation and attenuation. This coordinate can then be used by a control computer (not shown) for the purpose of controlling an audio system to direct sound from speakers to the user, for example by control of the phase and amplitude of the emitted sound energy. [0458]
  • While a single target on a hat can be simply detected and determined in its 3D location by the two or more camera stereo imaging and analysis system of the invention, natural features of the user could alternatively, or in addition, be used, such as determining the user's head location from the gray level image detected by the TV camera of FIG. 1, say. As pointed out elsewhere, the target can be on the body, and the head can be found knowing the target location, to simplify identification of the head in an overall image of a complex room scene, say. [0459]
  • Besides control of audio sound projection, such coordinate data can also be used to control the screen display, to allow stored images to be directed in such a way as to best suit a use in a given part of a room, for example using directional 3D projection techniques. If user head angle as well is determined, then the viewpoint of the display can be further controlled therefrom. [0460]
  • Data Transmission [0461]
  • Programs used with the invention can be downloaded from a variety of sources. For example: [0462]
  • A disc or other storage medium packed with an object such as a toy, preferably one with easily discernible target features, sold for use with the invention. [0463]
  • From remote sources, say over the internet, for example the web site of a sponsor of a certain activity. For example daily downloads of new car driving games could come from a car company's web site. [0464]
  • A partner in an activity, typically connected by phone modem or the Internet, could not only exchange game software, for example, but also the requisite drivers to allow one's local game to be commanded by data from the partner's activity over the communication link. [0465]
  • One of the interesting aspects of the invention is to obtain instructions for the computer controlling the game (or other activity being engaged in) using the input of the invention, from remote sources such as over the Internet. For example, let us say that General Motors wanted to sponsor the car game of the day, played with a toy car that one might purchase at the local Toys-R-Us store, with its basic dashboard, steering wheel, brake pedal, accelerator, gear lever, etc., all devices that can easily be targeted and inputted via the video camera of the invention of FIG. 4. [0466]
  • Today such a game would simply be purchased, perhaps along with the dashboard kit and the initial software on DVD or CD ROM. In fact those media could typically hold perhaps ten games of different types. For example, in the GM case, one day it could be a Buick and the next day a Corvette and so on, with the TV view part of the screen changing accordingly. [0467]
  • Remote transmission methods of the Internet, ISDN, fiber links dedicated or shared or otherwise are all possible and very appealing using the invention. This is true in many things, but in this case particularly since the actual data gathered could be reduced to small amounts of transmitted data. [0468]
  • The stereo photogrammetric activity at the point of actual determination can be used directly to feed data to the communications media. Orientation and position of objects, or of multiple points on objects, or the like can be transmitted with very little bandwidth, much less than is needed to transmit the complete image. In fact, one can transmit the image using the same cameras and then use the computer at the other end to change the image in response to the data transferred, at least over some degree of change. This is particularly true if one transmits a prior set of images that corresponds to different positions. These images can be used at any time in the future to play the game by simply calling them up from the transmitted datums. [0469]
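  • As a rough illustration of how little bandwidth such pose data needs, the sketch below packs a timestamped 6 degree of freedom update into a few tens of bytes; the wire format, field names, and units are assumptions for illustration only.

```python
import struct
import time

# Hypothetical wire format: one small packet per tracked object per frame,
# far smaller than transmitting the camera images themselves.
POSE_FORMAT = "<dB3f3f"   # timestamp, object id, x y z (m), roll pitch yaw (rad)

def pack_pose(obj_id, xyz, rpy):
    """Serialize one pose update into a compact binary packet."""
    return struct.pack(POSE_FORMAT, time.time(), obj_id, *xyz, *rpy)

def unpack_pose(packet):
    t, obj_id, x, y, z, r, p, yw = struct.unpack(POSE_FORMAT, packet)
    return t, obj_id, (x, y, z), (r, p, yw)

print(len(pack_pose(1, (0.1, 0.2, 1.5), (0.0, 0.0, 0.3))), "bytes per update")
```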
  • Similar to the playing function of FIGS. 5, 15 etc., there is also a teaching function, as was discussed relative to medical simulations in FIG. 8. The invention is, for example, also useful in the teaching of ballet, karate, dance and the like. The positions and orientation of portions of the ballerina or her clothes can be determined using the invention, and compared to computer modeled activity of famous ballerinas, for example. Or, in a more simple case, a motion of the student can be used to call TV images from a memory bank which were taken of famous ballerinas doing the same move, or of her instructor. And, given the remote transmission capability, her instructor may be in another country. This allows at least reconstructed motion at the other end using a very small amount of transmitted data, much the same as we would reconstruct the motion of a player in the game. [0470]
  • While this doesn't answer the question of how the instructor in the ballet studio actually holds the student on occasion, it does help the student to get some of the movement correct. It also allows one to overlay, visually or mathematically, the movements of the student, which have now been digitized in three dimensions, on the digitized three dimensional representation of famous ballerinas making the same basic moves, such as pas-de-chat. This allows a degree of self-teach capability, since clearly one might wish to look at the moves of perhaps three or four noted ballerinas and compare. [0471]
  • The invention thus can use to advantage 3D motion data obtained at very low cost in the home or in a small-time ballet studio, nonetheless linked through CD ROM, the Internet or other media to the world's greatest teachers or performers. What holds true for ballet generally would also hold true for any of the sports, artistic or otherwise, that are taught in such a manner. These can particularly include figure skating, golf or other sports that have to do with the moves of the person themselves. [0472]
  • One can use the invention to go beyond that, to the moves of the person themselves relative to other persons. This is particularly discussed in the aforementioned co-pending application relative to soccer and hockey, particularly relative to those sports that have goaltenders against whom one is trying to score a goal. Or conversely, if you're the goaltender, learning defensive moves against other teams that are trying to score on you. In each case one could have a world famous goalie instructing, just as in the ballet above, or one could have world famous forwards acting against you. [0473]
  • This is a very exciting thing in that you get to play the “best”, using the invention. These can even be using excerpts from famous games like the Stanley Cup, World Cup and so on. Like the other examples above, the use of 3D stereo displays for games, for sports, for ballet or other instruction, is very useful, even if it requires wearing well known stereo visualization aids such as TV frame controlled LCD based or polarized glasses. However a lot of these displays are dramatic even in two dimensions on a large screen. [0474]
  • Let us now consider how the game would work with two players in the same room, where play would be either with respect to themselves or with respect to others. [0475]
  • While there are cases of coordinated movements for the same purpose, as in figure skating, ballet and the like, most such games are one person relative to the other, with sensed sword play, pistol duels, karate, and so on. In what mode does this particularly connect with the invention? [0476]
  • In FIG. 5 above we've illustrated the idea of two children playing an airplane game. In this case, they are playing with respect to themselves, though not directly, but rather indirectly by viewing the results of their actions on the screen, and it is on the screen that the actual event of their interaction takes place. In addition it should be noted that a single player can hold an airplane in each hand and stage the dogfight himself. [0477]
  • In the case shown it was an airplane dogfight, one with respect to the other. Although, as discussed, one can using the invention, by simply changing one's command cues, by movements, gestures or another desired mode, change it from an airplane to a ship, or even change it from airplanes to lions and tigers. It is determined in the software and the support structure around the software. [0478]
  • The actual movements of the person or objects are still determined and still come into play. There are differences though of course because in the case of lions and tigers, one might wish to definitely target the mouth so that you could open your jaws and eat the other person or whatever one does. [0479]
  • The targeting of a beak outline was illustrated in the Big Bird Internet puppet example of FIG. 5. Curvilinear or line targets are particularly useful for some of these, as opposed to point targets. Such targets are readily available using retro-reflective beading as is commonly found today on some athletic shoes, shirts, or jackets, for the purpose of reflecting at night. [0480]
  • The use of two co-located players, one versus the other, but through the medium of the screen, is somewhat different. But if the screen is large enough it gives the ability to seem real. In other words, the player on the screen is so large and so well proportioned that it overcomes the fact that the player in the room with you is not the real one(s), but rather his representation on the screen. Any sort of game can be done this way where the sensed instruments are pistols, swords and the like. [0481]
  • In many cases the object locations and orientations sensed are simply the objects relative to the camera system. But often times, what is desired is the relative position of either the people or the object as has been discussed in referenced US Patent applications by Tim Pryor. [0482]
  • Now described is a teaching embodiment of the invention, also for use remotely over the Internet or otherwise, in which ballet instruction is given, or architecture is taught or accomplished. The teaching session can be stored locally or transmitted over a computer link such as the Internet. Karate or dance, for example, can be taught over the Internet. Targets, if required, can be attached to arms, hands, legs, or other parts of the body. The user's body part paths can be tracked in space and time by one or more camera systems. The video can be analyzed in real-time or can be recorded and later analyzed. [0483]
  • The TV image data can ultimately even be converted to “Quant” data representing sequences of motion detected by the camera system, for compact data transmission and storage. In this case, the specific path data could be recognized as a specific karate thrust, say. This motion, together with its beginning and end locations and orientation, may be adequate for an automatic system. On the other hand, a two-way Internet connection would allow the instructor's move to be compared with that of the student. By reducing the data to Quant data the instructor's and student's size differences could be factored out. [0484]
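  • One plausible reading of such “Quant” processing, sketched below, is to normalize each recorded path for location and overall size and resample it to a fixed number of samples before comparing student and instructor; the normalization and distance measure are assumptions, not the disclosed method.

```python
import numpy as np

def normalize_path(points):
    """Remove location and overall size from a recorded 3D motion path so
    that the instructor's and student's body-size difference is factored out."""
    p = np.asarray(points, float)
    p = p - p.mean(axis=0)                       # remove location
    scale = np.linalg.norm(p, axis=1).max()
    return p / scale if scale > 0 else p         # remove overall size

def resample(path, n=32):
    """Quantise a path to n evenly spaced samples along its arc length."""
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(path, axis=0), axis=1))]
    s = np.linspace(0, d[-1], n)
    return np.column_stack([np.interp(s, d, path[:, i]) for i in range(3)])

def motion_distance(student_path, instructor_path):
    """Mean point-to-point distance between two normalized, resampled paths."""
    a = resample(normalize_path(student_path))
    b = resample(normalize_path(instructor_path))
    return float(np.mean(np.linalg.norm(a - b, axis=1)))
```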
  • The invention can be used to determine position and orientation of everyday objects for training and other purposes. Consider that the position and orientation of a knife and fork in one's hands can be detected and displayed or recorded, if target datums are visible to the camera system, either natural (e.g. a fork tip end) or artificial, such as a retro-reflective dot stuck on. This allows one to teach proper use of these tools, and for that matter any tools, such as wrenches, hammers, etc., indeed any apparatus that can be held in the hands (or otherwise). The position of the apparatus held, with respect to the hands or other portions of the body or other bodies, may be determined as well. [0485]
  • This comes into clear focus relative to the teaching of dentists and physicians, especially surgeons. Scalpels, drills, and the like may all be targeted or otherwise provided with natural features such as holes, slots, and edges which can work with the invention. [0486]
  • In the military such training aids are of considerable use, and become as well an aid to inspiring young recruits, for whom the TV display and video game aspect can render perhaps a dull task, fun. The proper ergonomic way to dig a foxhole, hold a rifle, could be taught this way, just as one could instruct an autoworker on an assembly line installing a battery in a car. [0487]
  • FIG. 16[0488]
  • FIG. 16 illustrates an embodiment of the invention suitable for use on airplanes and in other tight quarters. A computer having an LCD screen 1610, which can be attached if desired to the back of the seat ahead 1605 (or to any other convenient member), has on either side of the screen, near the top, two video cameras 1615 and 1616 of the invention, which view the workspace on and above the tray table folding down from the seat ahead. The user communicates with the computer using a microphone (for best reception a headset type, not shown, connected to the computer) which converts voice to letters and words using known voice recognition techniques. For movement of words, paragraphs, and portions of documents, including spreadsheet cells and the like, the user may use the invention. [0489]
  • In the form shown, he can use a variety of objects as has been discussed above. For simplicity, consider battery powered LED 1620 on his finger 1625, which emits in a narrow wavelength region which is passed by bandpass filters (not shown for clarity) on the front of cameras 1616 and 1615, respectively. Since a full 3 degree of freedom location of the finger LED is possible, movement of the finger off the table (which otherwise becomes a sort of mouse pad, or touch pad in 2 axes) can be used to optionally signal the program to perform other functions. Or if there are 3D graphics to interact with, it can be of great utility for them. Indeed, other fingers, or those of the other hand, can also carry LED targets which allow many functions described herein to be performed in up to 6 axes. [0490]
  • One can also place a normal keyboard such as 1650, interfaced to the computer (built into the back of the LED display for example), on the tray table (or other surface), and use the LED equipped finger(s) to type normally. But a wide variety of added functions can again be performed by signaling the computer with the LED targets picked up by the video cameras. There can be movement gestures to signal certain windows to open, for example. Other functions are: [0491]
  • 1. Pointing, with a finger with a target and 3 points on the wrist, at an icon or other detail depicted on screen [0492]
  • 2. Extend values out of a chart in the 3rd dimension by pulling with targeted fingers in the manner described in FIG. 6 [0493]
  • 3. Solid icons can be placed on the tray table and detected, in this case each having a small LED or LEDs and a battery. These can be moved on the table to connote meaning to the computer, such as the position of spreadsheet cells or work blocks in a PERT chart, and the like [0494]
  • 4. Use the cameras to detect the position of a laser spot on an object on the tray, illuminated by a laser pointer held in the hand of the user (preferably the laser wavelength and LED wavelength would be similar, to allow both to pass the bandpass filters). [0495]
  • 5. It is noted the screen could be larger than otherwise used for laptop computers, since it is all out of the way on the back of the seat (or, at a regular desk, can stand up with folding legs for example). The whole computer can be built into the back of the device (and is thus not shown here for clarity). [0496]
  • 6. A storage space for targeted objects used with the invention can be built into the screen/computer combination or carried in a carrying case. Attachments such as targets for attachment to fingers can also be carried. [0497]
  • 7. It is noted that for desk use the invention allows human interaction with much larger screens than would normally be practical. For example, if the screen is built into the desktop itself (say tilted at 45 degrees like a drafting board), the user can grab/grip/pinch objects on the screen using the invention, and move them, rotate them, or otherwise modify their shape, location or size, for example using natural learned skills. Indeed a file folder can be represented literally as a file folder of normal size, and documents pulled out by grabbing them. This sort of thing works best with high resolution displays capable of the detail required. [0498]
  • FIG. 16 has illustrated an embodiment of the invention having a mouse and/or keyboard of the conventional variety combined with a target of the invention on the user to give an enhanced capability even to a conventional word processing or spreadsheet, or other program. [0499]
  • For example consider someone whose interest is developing a spreadsheet prediction for company profit and loss. Today this is done exclusively using a keyboard to type in data, and a mouse (typically) to direct the computer to different cells, pull down window choices and the like. This job is generally satisfactory, but leads to carpal tunnel syndrome and other health problems and is somewhat slow—requiring typing or mouse movements that can overshoot, stick and the like. [0500]
  • Voice recognition can clearly be used to replace the typing, and gesture sensing according to the invention including specialized gestures or movements such as shown in FIG. 5 can be used to improve recognition of voice inputs by the computer system. [0501]
  • But what else is possible? Clearly one can use the touch screen indicator aspect to point directly at objects on the screen. For example, consider that a user such as in FIG. 12 may be seated in front of a large high definition display screen on a wall, or tilted 45 degrees as at a writing desk. The user can either touch (or near touch) the screen as in FIG. 12, or he can point at the screen with his finger targeted with a retro-reflective Scotchlite glass bead target, the pointing direction being calculated using the 3 target set on top of his wrist as in FIG. 1b. The screen's datums are known, for example four retro-reflective plastic reflector points at the corners 1270-1273 as shown. As elsewhere discussed, projected targets on the screen can also be used to establish screen locations, even individually with respect to certain information blocks if desired. A stereo camera pair senses the positions of wrist and finger, and directs the computer and TV projector (not shown) to follow the wishes of the user at the point in question. The user may use his other hand, or his head if suitably targeted or having suitable natural features, to indicate commands to the camera computer system as well. [0502]
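  • A minimal sketch of the pointing calculation is given below, intersecting the wrist-to-fingertip ray with the known screen plane; the coordinate conventions and argument names are assumptions for illustration.

```python
import numpy as np

def pointing_spot(wrist_xyz, fingertip_xyz, screen_origin, screen_normal,
                  screen_x_axis, screen_y_axis):
    """Intersect the wrist-to-fingertip ray with the (known) screen plane and
    return the pointed-at location in screen coordinates, or None if the
    user is pointing away from the screen.  All inputs are numpy 3-vectors."""
    direction = fingertip_xyz - wrist_xyz
    denom = np.dot(direction, screen_normal)
    if abs(denom) < 1e-9:
        return None                                   # ray parallel to screen
    s = np.dot(screen_origin - fingertip_xyz, screen_normal) / denom
    if s < 0:
        return None                                   # pointing away
    hit = fingertip_xyz + s * direction               # intersection point
    return (np.dot(hit - screen_origin, screen_x_axis),
            np.dot(hit - screen_origin, screen_y_axis))
```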
  • Of interest is that the display can be in 3D, using suitable LCD or other glasses to provide the stereo effect. This allows one to pull the values out of an Excel chart and make them extendable in another dimension. One can pull them out, so to speak, by using, for example as shown in FIG. 6, two targeted fingers (e.g. a targeted thumb and targeted finger) to grab or pinch and pull the object in the cell. In a word processor the word on the page can be so grabbed. [0503]
  • One can use this effect to work backward from a 3D bar graph created by the spreadsheet program, i.e. to press on the individual bars until the form of the data shown meets one's goals; by pressing, as in a repeated finger motion downward, the program changes the data in certain cell scenarios (e.g. sales, expenses, profits, etc.). [0504]
  • In another example, transparent targeted blocks may be moved over the top of a transparent rear projection screen. The blocks can also extend in height above the screen by a variable amount. Data can be input via the computer screen, but also by varying the block height. The height is then encoded into the screen projection to change the color or another parameter. [0505]
  • In the factory layout example of FIG. 14 above, if blocks are translucent and placed on a screen, the colors, written description, or pictorial description (e.g. a lathe, or a mill) can be provided from the screen, with the target data on the block tracked and fed to the TV projection source. Such an arrangement might be useful for other complex tasks, also in real time, as in air traffic control. [0506]
  • Other target arrangements sufficient to determine pointing direction can also be used. This pointing method can also be used to point at anything—not just screens. It is especially useful with voice commands to tell the pointed item to do something. It is also of use to cue the projection system of the TV image to light up the pointed area or otherwise indicate where pointing is taking place. [0507]
  • For giving presentations to a group, the invention can operate in reverse from a normal presentation computer; that is, the person standing giving the presentation can point at the screen where the information is displayed, and what he pointed at, grasped, or whatever, is recorded by the cameras of the invention into the computer. [0508]
  • It is further noted that a laser pointer can be targeted and used for the purpose. [0509]
  • FIG. 17[0510]
  • This embodiment illustrates the versatility of the invention, for both computer input and music. As shown in FIG. 17A, a two camera stereo pair 1701 and 1702, connected to computer 1704, such as mentioned above for use in games, toys and the like, can also be used to actually read key locations on keyboards, such as those of a piano or typewriter. As shown, letter keys, or in the piano case musical note keys such as 1708, with retro target 1720 on their rear beneath the keyboard, are observed with the camera set 1701. A Z axis movement gives the key hit (and how much, if desired, assuming elastic or other deformation in response to the input force of player finger 1710), while the x (and y, if a black key, whose target is displaced for example) location of the key tells which letter or note it is. Speakers 1703 and 1705 provide the music from a MIDI computer digital to speaker audio translation. [0511]
  • For highest speed and resolution, useful with long keyboards, and where the objects to be observed are in a row (in this case the keys), the two cameras are in this instance composed of 2048 element Reticon line arrays operating at 10,000 readings per second. Specialized DSP processors to determine the stereo match and coordinates may be required at these speeds, since many keys can be pressed at once. [0512]
  • Alternatively, the piano player's fingertips, as disclosed in previous embodiments, can be imaged from above the keyboard (preferably with retroreflective targets for highest speed and resolution) to create knowledge of his finger positions. This, when coupled with knowledge of the keyboard data base, allows one to determine what key is being struck due to the z axis motion of the finger. [0513]
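  • A minimal sketch of the key detection logic described for FIG. 17 follows; the key pitch, press depth, and sign conventions are illustrative assumptions rather than values from the disclosure.

```python
KEY_PITCH_MM = 23.5          # assumed width of one white key
PRESS_DEPTH_MM = 4.0         # assumed z travel that counts as a key strike

def key_event(target_x_mm, target_z_mm, rest_z_mm, keyboard_left_edge_mm=0.0):
    """Map a key target (or fingertip target) position to (key_index, pressed):
    the x position along the keyboard selects the note, while z travel away
    from the rest position signals the hit."""
    key_index = int((target_x_mm - keyboard_left_edge_mm) // KEY_PITCH_MM)
    pressed = abs(target_z_mm - rest_z_mm) >= PRESS_DEPTH_MM
    return key_index, pressed

# Example: a target 51 mm along the keyboard that has travelled 5 mm.
print(key_event(51.0, 5.0, 0.0))
```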
  • FIG. 18[0514]
  • Virtual musical instruments are another music creation embodiment of the invention. A dummy violin surrogate such as 1820 in FIG. 18 can be provided, which is played on bowstrings, real or dummy, by a bow 1825, also real or dummy. The position of the bow vis-a-vis the dummy violin body 1820 proper, and the position of the fingers 1840 (which may be targeted), gives the answer as to what music to synthesize from the computer. It is envisioned that the easiest way to operate is to use retro-reflecting datums such as dot or line targets on all of the bow, violin, and fingers, such as 1830, 1831, 1832, and 1833, viewed with stereo camera system 1850 connected to computer 1858 and one or more loudspeakers 1875. [0515]
  • Frequency response at 30 frames per second, typical of standard television cameras, is generally enough to register the information desired, and interpolation can be used if necessary between registered positions (of, say, the bow). This may not be enough to provide the full timbre of the instrument, however. One can use faster cameras such as the line arrays mentioned above (if usable), PSD cameras as in FIG. 22, and/or the techniques below to provide a more desirable output. [0516]
  • The input from the targeted human, or musical instrument part (e.g. key or bow or drumstick), may cause, via the computer, the output to be more than a note, for example a synthesized sequence of notes or chords. In this manner one would play the instrument only in a simulated sense, with the computer synthesized music filling in the blanks, so to speak. [0517]
  • Similarly, a display such as 1860 may be provided of the player playing the simulated instrument, which may use the data of the positions of his hands in a few positions and interpolate between them, or call from memory more elaborate moves either taught or from a library of moves, so that the display looks realistic for the music played (which may also be synthesized) as noted above. [0518]
  • The display fill in is especially easy if a computer model of the player is used, which can be varied with the position data determined with the invention. [0519]
  • FIG. 19[0520]
  • FIG. 19 illustrates a method for entering data into a CAD system used to sculpt a car body surface, in which a physical toy car surrogate for a real car model, 1910, representing for example the car to be designed or sculpted, is held in a designer's left hand 1902, and sculpting tool 1905 in his right hand 1906. Both car and tool are sensed in up to 6 degrees of freedom each by the stereo camera system of the invention, represented by 1912 and 1913 (connected to a computer, not shown, used to process the camera data, enter data into the design program, and drive the display 1915). The objects are equipped with special target datums in this example, such as 1920-1922 on car 1910, and 1925-1927 on sculpting tool 1905. A display of a car to be designed on the screen is modified by the action of the computer program responding to the positions of the sculpting tool 1905 with respect to the toy car detected by the camera system, as the tool is rubbed over the surface of the toy car surrogate. [0521]
  • One can work the virtual model in the computer with tools of different shapes. Illustrated are two tools 1930 and 1931, in holder 1940, of a likely plurality, either of which can be picked up by the designer to use. Each has a distinctive shape with which to work the object, and the shape is known to the design system. The location of the shaped portion is also known with respect to the target datums on the tools, such as 1950-1952. As the tool is moved in space, the shape that it would remove (or alternatively add, if a build up mode is desired) is removed from the car design in the computer. The depth of cut can be adjusted by signaling the computer the amount desired on each pass. The tool can be used in a mode to take nothing off the toy, or, if the toy was of clay or coated in some way, it could actually remove material to give an even more lifelike feel. [0522]
  • 3 targets are shown, representatively, on tool 1930, with three more optionally on the other side for use if the tool becomes rotated with respect to the cameras. Each tool has a code such as 1960 and 1961 that also indicates what tool it is, and allows the computer to call up from memory the material modification effected by the tool. This code can be in addition to the target datums, or one or more of the datums can include the code. [0523]
  • FIG. 20[0524]
  • FIG. 20 illustrates an embodiment of the invention used for patient monitoring in the home or hospital. A group of retro-reflective targets such as 2021, 2030, and 2040 are placed on the body of the person 2045 and are located in space relative to the camera system (and if desired relative to the bed 2035, which also may include target 2036 to aid its location), and dynamically monitored and tracked by stereo camera system 2020 composed of a pair of VLSI Vision 1000×1000 CMOS detector arrays and suitable lenses. [0525]
  • For example, [0526] target 2021 on chest cavity 2022 indicates whether the patient is breathing, as it goes up and down. This can be seen by comparison of target location in sequential images, or even just target blur (in the direction of chest expansion) if the camera is set to integrate over a few seconds of patient activity.
  • [0527] Target 2030 on the arm, as one example of what might be many, is monitored to indicate whether the patient is outside a desired perimeter, such as the bed 2035. If so, computer 2080 is programmed to sound an alarm 2015 or provide another function, for example alerting a remote caregiver who can come in to assist. A microphone such as 2016 may also be interfaced to the computer to provide a listening function, and to signal when help is needed.
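A minimal Python sketch of the two monitoring checks just described is given below, assuming the camera pipeline already yields per-frame 3D positions for chest target 2021 and arm target 2030; the thresholds, the bed volume and the function names are illustrative assumptions only.

    import numpy as np

    def is_breathing(chest_y_history, min_excursion=0.005):
        """Breathing inferred from the rise and fall of the chest target (meters)."""
        y = np.asarray(chest_y_history)
        return (y.max() - y.min()) >= min_excursion

    def outside_perimeter(arm_xyz, bed_min=(0.0, 0.0, 0.0), bed_max=(0.9, 2.0, 1.0)):
        """True if the arm target has left the allowed volume around bed 2035."""
        p = np.asarray(arm_xyz)
        return bool(np.any(p < np.asarray(bed_min)) or np.any(p > np.asarray(bed_max)))

    def monitor_frame(chest_y_history, arm_xyz, alarm):
        if not is_breathing(chest_y_history):
            alarm("no chest motion detected")
        if outside_perimeter(arm_xyz):
            alarm("patient outside bed perimeter")

    # Example frame: shallow breathing, arm still inside the bed volume.
    monitor_frame([0.500, 0.506, 0.512, 0.507], (0.4, 1.0, 0.5), alarm=print)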
  • Also illustrated is an additional target or targets on other portions of the chest or body, such as [0528] 2040, so that if the patient, while asleep or otherwise, covers one with his arm, the other can be sensed to determine the same information.
  • Also disclosed, as in the figure above, is the conversion of a variable of the patient, in this case blood pressure, into a target position that can be monitored as well. Pressure in [0529] manometer 2050 causes a targeted indicator 2060 (monitored by an additional camera 2070, shown mounted to the end of the bed, achieving higher resolution if desired) to rise and fall, which indicates pulse as well.
  • While described here for patients, the same holds true for babies in cribs, and for the prevention of sudden infant death syndrome (SIDS), by monitoring the rise and fall of the chest during sleep, and by assuring that the baby is not climbing out of the crib or the like. [0530]
  • FIG. 21[0531]
  • Following from the above, a simple embodiment of the invention may be used to monitor and amuse toddlers and preschool age children. For example, in the FIG. 1 embodiment a Compaq 166 [0532] MHz Pentium computer 8, with Compaq 2D color TV camera 10, was used, together with an Intel frame grabber and processor card to grab and store the images for processing in the Pentium computer. This could see small retro targets on a doll or on the toddler's hands, with suitable LED lighting near the camera axis. The toddler is seated in a high chair or walking around at a distance of, for example, several feet from the camera mounted on top of the TV monitor. As the toddler moves his hands (or, alternatively, moves the doll's hands), an object such as a doll image or the modeled computer graphics image of a clown, let us say, could move up and down or side to side on the screen. (In the simple version of FIG. 1, only x and y motions of the toddler body parts or doll features are obtainable.) For comfort and effect, the image of the clown can also be taken or imported from other sources, for example a picture of the child's father.
  • As the child gets older, single or dual camera stereo of the invention can be used to increase the complexity with which the child can interact to 3, 4, 5, or 6 degrees of freedom with increasing sophistication in the game or learning experience. [0533]
  • Other applications of the invention are also possible. For example the toddler can be “watched” by the same TV camera periodically on alternate tv frames, with the image transmitted elsewhere so his mother knows what he is doing. [0534]
  • His movements indicate as well what he is doing and can be used as another monitoring means. For example, if he is running or moving at too great a velocity, the computer can determine this by the rate of change of position coordinates, or by observing certain sequences of motion indicative of the motion to be monitored. Similarly, and like the patient example above, if the coordinates monitored exceed a preset allowable area (e.g. a play space), a signal can be indicated by the computer. [0535]
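The velocity and play-space checks mentioned above could be coded, for example, as in the following Python sketch; the sampling interval, speed limit and play-space bounds are invented values for illustration.

    import math

    def too_fast(p_prev, p_curr, dt=0.1, max_speed=2.0):
        """True if the tracked point moved faster than max_speed (m/s) between frames."""
        return math.dist(p_prev, p_curr) / dt > max_speed

    def inside_play_space(p, x_range=(0.0, 3.0), y_range=(0.0, 3.0)):
        """Check the preset allowable area (e.g. a play space) in camera x-y."""
        return x_range[0] <= p[0] <= x_range[1] and y_range[0] <= p[1] <= y_range[1]

    print(too_fast((1.0, 1.0), (1.5, 1.0)))   # 5 m/s -> True, so signal the parent
    print(inside_play_space((2.5, 1.2)))      # True, still within the play space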
  • The device is also useful for amusement and learning purposes. The toddler's wrists or other features can be targeted, and when he claps, a clapping sound can be generated by the computer in proportion, or with different characteristics, or the like. The computer can be programmed, using known algorithms and hardware, to talk to him, tell him to do things, and monitor what he did, making a game out of it if desired. It also can aid learning, giving him visual feedback and audio and verbal appreciation of a good answer, score, and the like. [0536]
  • Similarly, we believe the invention can be used to aid learning and mental development in very young children and infants by relating gestures of hands and other bodily portions or objects such as rattles held by the child, to music and/or visual experiences. [0537]
  • Let us consider the apparatus and method of FIG. 21, where we seek not only to achieve the advantageous play and viewing activity, but also to improve the learning of young children through the use of games, musical training and visual training provided by the invention. In the case shown here we start with children in their crib, where they move from the rattle, to the mobile, to the busy box (e.g. standing in the crib) stage, the invention providing enhanced versions thereof and new toys made possible through an LCD display attached to the crib and the like. The second issue is what sorts of new types of learning experiences can be generated that combine music, graphics and other things. [0538]
  • Consider FIG. 21, wherein an [0539] LCD TV display 2101 is attached to the end of crib 2102, in which baby 2105 is lying, placed so the baby can see it. This display could be used to display, for example, a picture of the child's parents or pets in the home, or other desired imagery, which can respond both visually and audibly to inputs from the baby sensed with the apparatus of FIG. 1, or other apparatus of the invention. These are then used to help illustrate the learning functions. The camera system, such as stereo pair 2110 and 2115, is located as shown on the edges of the LCD screen or elsewhere as desired, and both cameras are operated by the computer 2135. Notice that the design with the cameras integrated can be that of the laptop application of FIG. 22 as well.
  • The baby's hands, fingers, head, feet or any other desired portion can be targeted, on his clothes or directly attached. Or natural features can be used if only simple actions such as moving a hand or head are needed (all possible today with low cost computer equipment suitable for the home). And importantly, the baby can easily hold a targeted rattle such as [0540] 2130 having target datums 2152 and 2153 at the ends (whose sound may be generated from the computer speaker 2140 instead, and be programmably changed from time to time, or react to his input), and he may easily touch, as today, a targeted mobile in the crib as well, or any other object such as a stuffed animal, block, or whatever.
  • In essence, the invention has allowed the baby to interact with the computer for the first time in a meaningful way that will improve his learning ability, and IQ in future years. It is felt by the inventors that this is a major advance. [0541]
  • Some learning enhancements made possible are: [0542]
  • A computer-recorded voice (with associated TV image if desired) of the child's parents or siblings, for example, calling the child's name or saying their names, is responded to by the baby, and voice recognition picks up the child's response and uses it to cue some sort of activity. This may not even be voice as we know it, but the sounds made by a child even in the early stages before it learns to talk. And it may stimulate him to talk, given the right software. [0543]
  • The child can also move his hands or head and similar things can take place. For example, he can create music, or react to classical music (a known learning improvement medium today) perhaps by keeping time, or to cue various visual cues such as artistic scenes or family and home scenes that he can relate to certain musical scores and the like. [0544]
  • The child can also use the computer to create art, by moving his hand, or the rattle or other object, and with some simple program, may be able to call up stored images as well. [0545]
  • Another embodiment could have the child responding to stored images or sounds, for example from a DVD disc read by the [0546] computer 2135, and in effect voting on the ones he liked by responding with movement over a certain threshold level, say a wiggle of his rattle. These images could later be played back in more detail if desired. And his inputs could be monitored and used by professionals for diagnosis, to determine further programs to help the child, or to determine whether certain normal patterns were missing, thus perhaps identifying problems in children at a very early age to allow treatment to begin sooner, or before it is too late.
  • The degree of baby excitement (amplitude and rate, etc., of rattle wiggles or of head and arm movement) can likewise be sensed and used as an input. [0547]
  • Note that in an ultimate version, data directly taken from the child, as in the FIG. 16 example, can be transmitted to a central learning center for assistance, diagnosis, or directly for interactivity of any desired type. [0548]
  • Therapy and Geriatrics [0549]
  • It is noted that an added benefit of the invention is that it can be used to aid mute and deaf persons who must speak with their hands. The interpretation of sign language can be done by analyzing dynamic hand and finger position and converting it, via a learning sequence or otherwise, into computer verbiage or speech. [0550]
  • It is also noted that the invention aids therapy in general, by relating motion of a portion of the body to a desired stimulus (visual, auditory, or physical touch). Indeed the same holds for exercise regimes of healthy persons. [0551]
  • And such activity made possible by the invention is useful for the elderly who may be confined to wheelchairs, unable to move certain parts of the body, or the like. It allows them to use their brains to the fullest, by communicating with the computer in a different way. [0552]
  • Alternatively, stroke victims and other patients may need the action of the computer imagery and audio in order to trigger responses in their activity to retrain them, much like the child example above. [0553]
  • An interesting example too is elderly people who have played musical instruments but can no longer play due to physical limitations. The invention allows them to create music by using some other part of their body, and by using, if needed, a computer generated synthesis of chords, added notes, or whatever, to make up for their inability to quickly make the movements required. [0554]
  • Other Applications of the Invention [0555]
  • One of the advantages of this invention is that all sorts of objects can be registered in their function on the same camera system, operating with single, dual, or other stereo capabilities, and all at low cost. The fact that the people, the objects, and the whole stationary platform, such as desks, floors, and walls, can all be registered with the same generic principles is a huge benefit of the invention. [0556]
  • This means that the cost of writing the operating control software suitable for a large number and variety of applications only has to be incurred once. And similarly, the way in which it operates, the way in which the people interact with it, only has to be learned once. Once one is familiar with one application, one is almost familiar with all, and none need cost more than a few dollars or tens of dollars by itself in added cost. [0557]
  • The standard application aspect of the invention is important too from the point of view of sharing the cost of development of hardware, software, targets, materials, etc. over the largest possible base of applications, such that production economies are maximized. [0558]
  • This is relatively the same as the situation today, where one uses a mouse all the time, for every conceivable purpose. But the mouse itself is not a natural object. One has to learn its function, and particular to each program, one may have to learn a different function. Whereas in the invention herein described, it is felt by the inventors that all functions are more or less intuitive and natural; the teaching, the games, the positioning of objects on a CAD screen. All these are just the way one would do it in normal life. It is possible to see this when one talks and how one uses one's hands to illustrate points or to hold objects in position or whatever. Whatever you do with your hands, you can do with this invention. [0559]
  • Speech Recognition [0560]
  • One application of this is actually to aid in speech recognition. For example, in Italy in particular, people speak with their hands. They don't speak only with their hands, but they certainly use hand signals and other gestures to illustrate their points. This is of course not just true of the Italian language, but the latter is certainly famous for it. [0561]
  • This invention allows one to directly sense these positions and movements at low cost. What this may allow one to do then is utilize the knowledge of such gestures to act as an aid to speech recognition. This is particularly useful since many idiomatic forms of speech are not able to be easily recognized but the gestures around them may yield clues to their vocal solution. [0562]
  • For example, it is comprehended by the invention to encode the movements of a gesture and compare them with either a well known library of hand and other gestures taken from the populace as a whole, or one taught using the gestures of the person in question. The person would make the gesture in front of the camera, the movements and/or positions would be recorded, and he would record in memory, using voice or keyboard or both, what the gesture meant, which could be used in future gesture recognition, or in voice recognition with an accompanying gesture. A look-up table can be provided in the computer software, where one can look up, in a matrix of gestures, the meaning and the confidence level therein, and then compare that to add to any sort of spoken-word meaning that needs to be addressed. [0563]
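The look-up table idea can be sketched as follows in Python: gestures taught by the user are stored with a meaning and a confidence level, and consulted to reinforce whichever spoken-word hypothesis they agree with. The gestures, scores and function names are illustrative assumptions, not a prescribed vocabulary.

    GESTURE_TABLE = {
        "hand_wave_outward": {"meaning": "go away", "confidence": 0.8},
        "pinch_fingers":     {"meaning": "a little", "confidence": 0.6},
    }

    def teach_gesture(name, meaning, confidence=0.5):
        """User performs a gesture and states, by voice or keyboard, what it means."""
        GESTURE_TABLE[name] = {"meaning": meaning, "confidence": confidence}

    def combine(speech_hypotheses, gesture_name):
        """Boost whichever spoken-word hypothesis agrees with the sensed gesture."""
        gesture = GESTURE_TABLE.get(gesture_name)
        scored = []
        for phrase, speech_score in speech_hypotheses:
            bonus = gesture["confidence"] if gesture and gesture["meaning"] in phrase else 0.0
            scored.append((phrase, speech_score + bonus))
        return max(scored, key=lambda t: t[1])

    teach_gesture("flat_hand_push", "stop", confidence=0.7)
    print(combine([("please stop now", 0.4), ("please shop now", 0.5)], "flat_hand_push"))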
  • Artifacts [0564]
  • One of the advantages of the invention is that there is a vast number of artifacts that can be used to aid the invention to reliably and rapidly acquire and determine the coordinates of the object datums at little or no additional cost relative to the camera/computer system. For example we discussed retro-reflective targets on fingers, belt buckles, and many forms of jewelry, clothing and accessories (e.g. buttons) and the like. Many of these are decorative, and objects such as these can easily be designed and constructed so that the target points represented are easily visible by a TV camera, while at the same time being interpreted by a human as a normal part of the object and therefore unobtrusive (see for example the referenced Tim Pryor copending applications). Some targets indeed can be invisible and viewed with lighting that is specially provided, such as ultraviolet or infrared. [0565]
  • Surrogates [0566]
  • An object, via the medium of software plus display screen and/or sound may also take on a life as a surrogate for something else. For example, a simple toy car can be held in the hand to represent a car being designed on the screen. Or the toy car could have been a rectangular block of wood. Either would feel more or less like the car on the screen would have felt, had it been the same size at least, but neither is the object being designed in the computer and displayed on the screen. [0567]
  • Surrogates do not necessarily have to “feel right” to be useful, but it is an advantage of the invention for natural application by humans, that the object feel or touch can seem much like the object depicted on the screen display even if it isn't the same. [0568]
  • Anticipatory Moves [0569]
  • The invention can sense dynamically, and the computer connected to the sensor can act on the data intelligently. Thus the sensing of datums on objects, targeted or not, can be done in a manner that optimizes the function of the system. [0570]
  • For example if one senses that an object is rotating, and targets on one side may likely recede from view, then one can access a data base of the object, that indicates what targets are present on another side that can be used instead. [0571]
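A minimal Python sketch of this anticipatory lookup follows; the object database, the visibility cutoff angle and the look-ahead time are assumptions made only to show the idea of predicting which target set to search for next.

    OBJECT_DB = {
        "toy_plane": {
            "front": ["t1", "t2", "t3"],
            "back":  ["t4", "t5", "t6"],   # mirrored target set on the other side
        }
    }

    def visible_side(yaw_degrees):
        """Crude visibility rule: beyond ~75 degrees the front targets recede from view."""
        return "front" if abs(yaw_degrees) < 75 else "back"

    def targets_to_search(object_name, yaw_now, yaw_rate, lookahead_s=0.2):
        """Anticipate which target set the tracker should look for in the next frames."""
        predicted_yaw = yaw_now + yaw_rate * lookahead_s
        return OBJECT_DB[object_name][visible_side(predicted_yaw)]

    print(targets_to_search("toy_plane", yaw_now=70, yaw_rate=90))  # -> back-side set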
  • Additional Points [0572]
  • It is noted that in this case, the word target or datum essentially means a feature on the object or person used for the purpose of the invention. As has been pointed out in previous applications by Tim Pryor, these can be either natural features of the object, such as fingernails or fingertips, hands or so on, or can be, as is often preferable, specialized datums put on especially to assist the function of the invention. These typically include contrasting datums, due to high brightness retro-reflection or color variation with respect to their surroundings, often further distinguished, or alternatively distinguished, by some sort of pattern or shape. [0573]
  • Examples of patterns can include the patterns on cloth such as stripes, checks, and so on. For example, the pointing direction of a person's arm in a sleeve of striped cloth, with the stripes running along the length of the sleeve, would be indicated by determining the 3D pointing direction of the stripes. This can easily be done using edge detection algorithms with the binocular stereo cameras here disclosed. [0574]
  • A useful shape can be a square, a triangle, or something not typically seen in the room, desktop, or other area in which one would normally operate, such that it stands out. Or even if a common shape, the combination of the shape with a specific color or brightness, or both, often allows recognition. [0575]
  • It is appreciated that beyond the simple 2 dimensional versions as described, such as in FIG. 1, many applications benefit from, or even depend on, 3D operation. This is disclosed widely within the application as being desirably provided either from a single camera or from two or more cameras operating to produce stereo imagery that can be combined to solve for the range distance Z. However, z dimension data can also be generated, generally less preferably, by other means, such as ultrasonics or radar, or laser triangulation, if desired to effect the desirable features of many of the applications described. [0576]
  • Another point to stress concerning the invention is the fact of the performance of multiple functions. This allows it to be shared amongst a large number of different users and different uses for the same user, and with a commonality, as mentioned above, of the teaching of its function, the familiarity with its use, and so forth. [0577]
  • One example of this is the use of a targeted hand which one moment is for a game, the next moment is for a CAD input, and the next for music, or whatever. [0578]
  • A key is the natural aspect of the invention: it enables, at low cost and high reliability, the use of learned natural movements of persons, for work, for play, for therapy, for exercise, and for a variety of other work and safety uses here disclosed, and similar to those disclosed. [0579]
  • FIGS. [0580] 1 to 3 have illustrated several basic principles of optically aided computer inputs using single or dual/multicamera (stereo) photogrammetry. Illustrated are new forms of inputs to effect both the design and assembly of objects.
  • When one picks up a polygon object, the TV image of the object itself can be processed, or, more likely, special ID data on the object or incorporated with the target datums can be accessed by the computer to recognize the object and call up the desired image, either of the object or of something it represents. Then as you move it, it moves; and as you elaborate on the computer rendition of it in due course, given the user's input and work, it gradually morphs into a car! (It could be a standard car instantly if the polygon were declared to the computer to be a car.) [0581]
  • One can draw on the computer screen, on a pad of paper or easel, or in the air with the invention. Computer instructions can come from all conventional sources, such as keyboards, mice and voice recognition systems, but also from gestures and movement sequences, for example using the TV camera sensing aspect of the invention. [0582]
  • Note that, for example, a targeted paint brush can instantly provide a real-feeling way to use painting type programs. While painting itself is a 2D activity on the paper, the 3D sensing aspect of the invention is used to determine when the brush is applied to the paper, or lifted off, and in the case of pressing the brush down to spread the brush, the z axis movement into the plane of the paper determines how much spreading takes place (the paper plane being defined as xy). [0583]
  • The 3D aspect is also used to allow the coordinate system to be transformed between the xyz as so defined and the angulation of the easel with respect to the camera system, wherever it is placed, typically overhead, in front, or somewhere to the side. This freedom of placement is a major advantage of the invention, as is the freedom of choice of where targets are located on objects, thanks in particular to the two camera stereo system's ability to solve all necessary photogrammetric equations. [0584]
  • Note too that the angle of the brush, or of a pen held in the hand, with respect to the z axis can also be used to instruct the computer, as can any motion pattern of the brush, either on the paper or waved in the air. [0585]
  • In CAD activities, the computer can be so instructed as to parametric shape parameters, such as the percentage of circle and square. As with the brush, the height in z may be used to control an object width, for example. [0586]
  • Illustrated too is a computer aided design (CAD) system embodiment according to the invention, which illustrates particularly the application of specialized sculpture tools with both single and dual alias object inputs, useful for the design of automobiles, clothes and other applications. [0587]
  • The physical feel of an object in each hand is unique, and combines feel with sight on the screen; it feels like what it is shown to be, even if it isn't really. The feel can be rigid or semi-rigid, or indeed one can actually remove (or add) material from the alias object. [0588]
  • Two or more alias or surrogate objects according to the invention can also be used together, for example for sculpture, whittling and other solid design purposes with one, two, or more coordinated objects. [0589]
  • Illustrated were additional alias objects according to the invention, for example for use in sculpture, whittling and other solid design purposes with one, two, or more coordinated objects. [0590]
  • The unique ability of the invention to easily create usable and physically real alias objects results from the ease of creating targeted objects which can be easily seen at high speed by low cost TV and computer equipment (high speed is here defined as greater than, say, 3 frames per second, and low cost as under $5000 for the complete system including camera, light source(s), computer and display; the multiple camera version is somewhat higher). [0591]
  • The objects can be anything on which 3M Scotchlite 7615 type retro-reflective material can be placed, or other reflective or high contrast material incorporated into the surface of an object. You can stick the targets on fingers, toys or whatever, and they can be easily removed if desired. With two (or more) camera stereo systems, no particular way of putting them on is needed; one can solve photogrammetrically for any non-collinear set of three to determine object position and orientation, and any one target can be found in x, y and z. [0592]
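For reference, the photogrammetric solution for position and orientation from any three (or more) non-collinear targets can be sketched with the standard Kabsch/Procrustes fit, as below; the model and measured coordinates are invented, and the routine is a textbook method rather than the particular equations used in the preferred embodiment.

    import numpy as np

    def rigid_pose(model_pts, measured_pts):
        """Return rotation R and translation t with measured ~= R @ model + t."""
        P = np.asarray(model_pts, float)
        Q = np.asarray(measured_pts, float)
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)                 # cross-covariance of the point sets
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against an improper reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cq - R @ cp
        return R, t

    # Three targets stuck anywhere (non-collinear) on a block, in the block's own frame:
    model = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.05, 0.02)]
    # The same targets as the stereo cameras measure them after the block is moved:
    measured = [(0.5, 0.2, 1.0), (0.5, 0.3, 1.0), (0.45, 0.2, 1.02)]
    R, t = rigid_pose(model, measured)
    print(np.round(R, 2), np.round(t, 2))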
  • The physical nature of the alias object is a very important aspect of the invention. It feels like a real object; even though it is a simple targeted block, one feels that it is a car when one views, on the screen, the car representation that the block position commands. Feeling the object while looking at the screen is totally different from controlling an object on a screen with a mouse. [0593]
  • Even more exciting and useful is the relative juxtaposition of two objects, with both on the screen. [0594]
  • For example, a child can affix special targets (using velcro, tape, pins, or other means) on his favorite stuffed toys and then he can have them play with each other, or even with a third. Or two children can play, each with their own doll or stuffed animal. But on screen, they convert the play into any kind of animal, including scenery (e.g. a barnyard). The animals can have voice added in some way, either by the computer, or by prerecorded sounds, or in real time via microphones. Via the internet, new voice inputs or other game inputs can be downloaded at will from assisting sites. And programs, and voice, and TV imagery can be exchanged between users. [0595]
  • Computer imagery of the actual animal can be taken using the same TV camera, recorded, and the 3D position determined during play, and the image transformed into a 3D image, rotated or whatever. [0596]
  • The same argument of attaching targets to toys applies to objects which are the physical manifestations of learned skills: [0597]
  • A pencil to a draftsman [0598]
  • A scissors, chalk, and rule to a dressmaker [0599]
  • A brush to an artist [0600]
  • An instrument or portion(eg a drumstick, a bow) to a musician [0601]
  • An axe to a lumberjack [0602]
  • A drill, hammer, or saw to a carpenter [0603]
  • A pistol to a policeman or soldier [0604]
  • A scalpel to a surgeon [0605]
  • A drill to a dentist [0606]
  • And so on [0607]
  • Each person can use a real or alias object (e.g. a broomstick piece for a hammer), targeted as he chooses, in order to use the audio and visual capabilities of the computer generated activity of the invention. All are more natural to him or her than a mouse! In each case too, the object to be worked on can also be sensed with the invention: [0608]
  • The cloth of the dress [0609]
  • The paper(or easel/table) of the artist or draftsman [0610]
  • The violin of the musician (along with the bow) [0611]
  • The log of the lumberjack [0612]
  • The teeth or head of the dental patient, [0613]
  • And so on . . . [0614]
  • The computer program, using the sensor input, can faithfully utilize the input, or it can extrapolate from it. For example rather than play middle C, it can play a whole chord, or knowing the intended piece, play several of the notes in that piece that follow. Similarly, one can start a simulated incision with a scalpel, and actually continue it a distance along the same path the student doctor started. [0615]
  • Sounds, Noise and Visual Cues [0616]
  • The cocking of a hammer on a toy pistol can act as a cue in many cases. A microphone connected to the computer can pick this up and analyze the signature and determine that a gun may be fired. This can cause the vision analysis program looking at the TV image to look for the pistol, and to anticipate the shot. The sound of the gun, rather than a visual indicator, can alternatively be used to cue the displayed image data as well. Two microphones, if used, can be used to triangulate on the sound source, and even tell the TV camera where to look. In many cases sound and physical action are related. Sound can for example be used to pick up a filing noise, to indicate that an alias object was actually being worked by a tool. The TV camera(s) can monitor the position and orientation of each, but the actual contact is registered by sound. Or contact could be judged from just the physical proximity of one image to another; however, the sound is created by the actual physical contact, which is more accurate, and more real to the user. [0617]
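A minimal sketch of the two-microphone cue is given below: the delay between channels is estimated by cross-correlation and turned into a bearing that could tell the TV camera where to look. The microphone spacing, sample rate and sign convention are assumptions for the sketch.

    import numpy as np

    def direction_from_tdoa(sig_left, sig_right, mic_spacing=0.3, fs=44100, c=343.0):
        """Bearing in degrees; negative values mean the source is nearer the left microphone."""
        corr = np.correlate(sig_left, sig_right, mode="full")
        lag = np.argmax(corr) - (len(sig_right) - 1)   # delay of left relative to right, samples
        delay = lag / fs                               # seconds
        s = np.clip(delay * c / mic_spacing, -1.0, 1.0)
        return float(np.degrees(np.arcsin(s)))

    # Synthetic click (e.g. a toy pistol hammer) arriving 5 samples earlier on the left:
    click = np.zeros(200); click[100] = 1.0
    left, right = np.roll(click, -5), click
    print(round(direction_from_tdoa(left, right), 1), "degrees")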
  • Signature Recognition [0618]
  • The invention can look for many signatures of object position and movement, including complex sequences. This has been described in another context relative to FIG. 7 for recognizing human gestures. The recognition algorithm can be taught beforehand using the position or movement in question as an input, or it may be preprogrammed to recognize data presented to it from a library, often specific to the game/activity of interest. [0619]
  • Such recognition can also be used to anticipate an action. For example, if a bowstring or hand is moved directly back from a bow, the recognition is that one is drawing the bow, and that an arrow may be ready to be shot. The computer can then command the screen display or sound generation speakers to react (eyes or head move, the person on screen runs away, etc.). Similarly, the actual action of releasing the bow can be sensed, and the program reacts to the move. [0620]
  • It is of use to consider some of what even the simplest version of the invention, illustrated in FIG. 1[0621] a, could accomplish. In the lowest cost case, this uses retroreflective glass bead tape, or jewelry, on an object to allow determination in x and y (the plane perpendicular to the camera axis) of, for example:
  • 1. position of one or more points on or portions of, or things to do with, babies, game players, old persons, disabled, workers, homemakers, etc. [0622]
  • 2. Determine position of object such as something representing position or value of something else [0623]
  • 3. Determine location of a plurality of parts of the body, a body and an object, two objects simultaneously, etc [0624]
  • 4. With additional software and datums, expansion to the FIG. 1[0625] b version, to determine up to six degrees of freedom of an object (or of one or more objects with respect to each other), using a single camera but with a target set having known relationships (single camera photogrammetry).
  • Today, the equipment involved to do the foregoing would appear to be a USB camera and, in the simplest case, no frame board; the images go right into the computer. This today could result in images being processed at maybe 10 hertz or less. Simple thresholding, and probably color detection, would be all that would be needed. More sophisticated shape recognition and finding of complex things in the scene are not required in simple cases with limited background noise, and are aided by use of the retroreflector or LED sources. [0626]
  • The only other equipment that would be needed in this scenario is the lighting unit that would surround the camera. Clearly this would be somewhat camera specific in terms of its attachment and so on. Many cameras, it would appear, have been designed for internet use; cameras, and lighting as needed, could be built right into the TV display units. [0627]
  • In the simplest case, there would be simply one target and one only. This would allow a simple TV camera to give a 2D point position, essentially to be a 2D mouse in space (except that the absolute position of the point relative to the camera can be determined; the mouse of today is incremental from its starting point). [0628]
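A minimal sketch of this single-target "2D mouse in space" follows: the bright target's pixel centroid is found by simple thresholding and mapped to an absolute screen position, in contrast to the incremental mouse. Camera and screen resolutions, and the threshold, are illustrative assumptions.

    def centroid_of_bright_pixels(frame, threshold=200):
        """frame is a list of rows of grey levels; simple thresholding, as in the text."""
        xs, ys, n = 0.0, 0.0, 0
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                if v >= threshold:
                    xs, ys, n = xs + x, ys + y, n + 1
        return (xs / n, ys / n) if n else None

    def to_screen(centroid, cam_res=(640, 480), screen_res=(1024, 768)):
        """Absolute mapping from camera pixel coordinates to screen coordinates."""
        cx, cy = centroid
        return (cx * screen_res[0] / cam_res[0], cy * screen_res[1] / cam_res[1])

    # A tiny synthetic frame with one retroreflective blob:
    frame = [[0] * 8 for _ in range(6)]
    frame[3][4] = frame[3][5] = 255
    print(to_screen(centroid_of_bright_pixels(frame), cam_res=(8, 6)))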
  • Some applications [0629]
  • 1. Direct mouse replacement. The mouse today is in 2D and so is this. Generally speaking, depending on where the camera is, this is either the same two dimensions, that is looking down at the work space, or the two dimensions are in another plane. [0630]
  • 2. Indeed one could apply a single target, capable of being sensed by the TV camera of the invention, to the ordinary mouse (or joystick or other input) of today. This could give more degrees of freedom of information, such as angles or movement off the mouse table surface (the z direction). For example, a 3D input device can be produced since the camera would provide XZ (z perpendicular to the plane of the surface) and the mouse would provide XY (in the plane of the surface), so therefore you would have all three dimensions. [0631]
  • 3. Carrying the mouse elaboration one step further, a mouse point could be movable. That is, the target could be wiggled by the finger holding the mouse, to signal a move or other action to the computer. This would then allow you to put inputs to the computer into the device without adding any electrical wires or anything. [0632]
  • 4. Transducers can also be used as single point inputs, for example of pressures or temperatures or anything that would make a target move; for example, in the latter case, the target could be on the end of a bimetal strip which changes position with temperature. [0633]
  • Application to Multiple Points and Objects [0634]
  • Another application is to register the relative position of one object to another. For example, today the mouse is basically an odometer. It can't really give any positional data relative to something but can only give the distance moved in two directions which is then converted from some home location onto the screen. [0635]
  • The invention however is absolute, as the camera is as well. It can provide data on any point relative to any other point, or even to groups of points, on objects, humans, or both. Even using the simplest form of the invention, one can put a target on a human and track it or find its position in space. Here again, in the beginning, for example, in two dimensions, X and Y only (FIG. 1[0636] a).
  • For example, with a single point one can make a mouse adjunct, where moving one's head with a target on it provides an input into the computer while still holding the mouse and everything in normal juxtaposition. [0637]
  • One step beyond this is to have more than one point on the human. Clearly a finger relative to another finger or a hand relative to another hand, either or both to the head and so on. As has been noted, a method of achieving high contrast and therefore high reliability is to utilize an LED source as the target. This is possible with the invention, but requires wiring on the object, and thus every object that is to be used has to have a power cable or a battery, or a solar cell or other means to actuate the light—a disadvantage if widespread applicability is desired. [0638]
  • The LED in its simplest form can be powered by something that itself is powered. This means an LED on top of the mouse for example. On the other hand, typically the LED would be on an object where you would not like a power cable and this would then mean battery operated. [0639]
  • The idea of remote power transmission to the target LED or other self luminous target however should be noted. It is possible to transmit electromagnetic radiation (radio, IR, etc) to a device on an object, which in turn would generate power to an LED which then converts that to DC or modulated light capable of detection optically. Or the device itself can directly make the conversion. [0640]
  • The basic technical embodiment of the invention illustrated in FIG. 1 uses a single TV camera for viewing a group of 3 or more targets (or special targets able to give up to a 6 degree of freedom solution), or a set of at least two TV cameras for determining the 3D location of a number of targets individually, and in combination to provide object orientation. These cameras are today adapted to the computer by use of the USB port, or better still, FireWire (IEEE 1394). The cameras may be employed to sense natural features of objects as targets, but today, for cost and speed reasons, are best used with high contrast targets such as LED sources on the object, or more generally with retro-reflective targets. In the latter case lighting, as with IR LEDs, is provided near the optical axis of each camera used. For scene illumination, which can best be done on alternate camera frames from target image acquisition, broad light sources can be used. Laser pointers are also very useful for creating one or more high contrast indications, simultaneously or in sequence, on object surfaces that can be sensed by the stereo cameras (typically two or more). [0641]
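The two-camera stereo solution for a single target's 3D location can be sketched, under the simplifying assumption of an ideal rectified pair with parallel optical axes, as follows; the focal length and baseline are illustrative numbers, not calibration values of any described camera pair.

    def triangulate(xl_px, yl_px, xr_px, focal_px=800.0, baseline_m=0.30):
        """Range from disparity, then x and y from the left camera's projection."""
        disparity = xl_px - xr_px               # pixels; must be positive for a real point
        if disparity <= 0:
            raise ValueError("point at infinity or mismatched targets")
        z = focal_px * baseline_m / disparity
        x = xl_px * z / focal_px
        y = yl_px * z / focal_px
        return x, y, z

    # Target seen at image coordinates measured from each camera's principal point:
    print(triangulate(xl_px=120.0, yl_px=40.0, xr_px=80.0))   # -> (0.9, 0.3, 6.0) meters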
  • Using laser (or other triangulation source projection), or the contacting of an object with a targeted finger or stylus member, an object can be digitized using the same camera system used for target related inputs. This is an important cost justification of total system capability. [0642]
  • Coincidence of action, i.e. a sensed gesture using the invention, can be used to judge a voice operated signal legitimate in a noisy background. Similarly, other inputs can be judged effectively if combined with the position and movement sensing of the invention. [0643]
  • The invention combined with voice input makes the user much more portable; for example, one can walk around the room and indicate to the computer both action and words. The target, if a plain piece of glass bead retroreflector, typically cannot be seen beyond angles of plus or minus 45 degrees from the normal of the reflector aligned with the camera viewing axis (indeed some material drops out at 30 degrees). When a performer spins around, this condition is easily exceeded, and the data drops out. For this reason, targets pointing in different directions may be desirable. Rather than using several planar targets with the above characteristics, each pointed in a different direction, say rotationally about the head-to-toe axis of a dancer, one can in some cases use multi-directional targets, typically large balls, beads and faceted objects such as diamonds. [0644]
  • In some cases only 3D locations are needed. The orientation at times is a secondary consideration. In these cases the [0645] target 1650 could be attached to a gyroscope 1655 that in turn is attached to a base 1660 by a ball joint 1665 or other free floating mechanical link. The target could be initially tilted directly toward the cameras, allowing the cameras to view the target more precisely. The base plate is then attached to the object to be tracked. The position of the attachment can be calculated once the target location and orientation are established. Since the gyroscope would hold the target orientation toward the cameras as the dancer turns, this method extends the range of motion allowed to the dancer or other users.
  • It should be noted that many of the embodiments of the invention described do not depend on TV cameras, stereo imaging, special targets, or the like, but rather can be used with any sort of non contact means by which to determine the position of a point, multiple points, or the complete position and orientation of the object, or portion of a human, used in the embodiment. While optical, and particularly TV camera based, systems are preferred for their low cost and wide functionality, ultrasonic and microwave means can also be used as transduction means in many instances. [0646]
  • Note that an object may be physically thrown, kicked, slung, shot, or otherwise directed at the image represented on the screen (say at an enemy or some object, or, in the case of a baseball game, at a batter's strike zone, for example), and the thrown object tracked in space by the stereo cameras of the invention and/or determined in its trajectory or other function by information relating to the impact on the screen (the latter described in a referenced co-pending application). Damage to the screen is minimized by using front projection onto a wall. [0647]
  • FIG. 22[0648]
  • FIG. 22 illustrates the use of a PSD (position sensitive photodiode) based image sensor as an alternative to, or in conjunction with, a solid state TV camera. Two versions are shown: a single point device, with retro-reflective illumination or with a battery powered LED source, is described, and a multi-point device with LED sources can also be used. A combination of this sensor and a TV camera is also described, as is an alternative using fiber optic sources. In addition, a device using such an imaging device and a retroreflective background is presented as an alternative to specialized high reflectance datums on the human, for example. [0649]
  • To achieve high signal to noise, the PSD detector can utilize modulated sources, and demodulated PSD outputs as is well known. Detectors of this type are made for example by Sitek in Sweden and Hamamatsu in Japan. Where individual LED targets on the object are used, they may also be individually modulated at different frequencies in order to be distinguished one from the other, and from the background, and/or they may be rippled in sequence. Similarly fiber optically remoted sources may do this as well. [0650]
  • The [0651] camera 2210 is composed of a lens 2215 and a PSD detector 2220, which provides two voltage outputs proportional to the location of an image on its face. When a single bright point such as retroreflective target 2230 is illuminated with a co-axial, or near coaxial, light source 2235, a spot 2240 is formed on the PSD face, whose xy location voltage signal 2244 is digitized and entered into the control computer 2250 by known excitation and A-D converter means. Alternatively, an LED or other active source can be used in place of the retro-reflector and its light source. In either case the background light reaching the PSD is much less than that from the target and is effectively ignored. (If it isn't, errors can result, as the PSD is dumb and can't sort out what is a target from the background, except via filtering at the special wavelength of the LED using filter 2247 in front of the detector, or by modulating the LED, or the LED of the retro light source, using modulated power supply 2236, a novel approach which recognizes that the light from this source does not contribute so much to background as to the retroreflected return.) When a modulated source is used, the LED output signal 2244 is demodulated at the same frequency by filter 2245.
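A minimal sketch of reading such a PSD is given below, using the textbook normalization of the electrode signals to recover the spot position, together with a crude lock-in style demodulation of the kind mentioned for modulated sources; the scaling and names are assumptions, not those of any particular detector.

    def psd_spot_position(i_x1, i_x2, i_y1, i_y2, half_length_mm=5.0):
        """Spot (x, y) on the detector face from the four electrode signals."""
        x = half_length_mm * (i_x2 - i_x1) / (i_x2 + i_x1)
        y = half_length_mm * (i_y2 - i_y1) / (i_y2 + i_y1)
        return x, y

    def demodulate(samples, reference):
        """Crude lock-in style demodulation to reject unmodulated ambient light."""
        return sum(s * r for s, r in zip(samples, reference)) / len(samples)

    print(psd_spot_position(0.4, 0.6, 0.55, 0.45))   # spot at (+1.0, -0.5) mm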
  • Such PSD systems are fast, and can run at speeds such as 10,000 readings per second, far beyond a TV camera's ability to see a point. This is very desirable where high speed is needed, or where high background noise rejection is required, such as in bright light (e.g. in a car on a sunny day). A TV camera and a PSD camera as above can be used in concert, where desired. [0652]
  • A combination of this sensor and a TV camera is now described. As shown, a PSD chip such as [0653] 2260 can be built into a TV camera 2265 having a lens 2270 and a CCD array chip 2271, using a beam splitter 2275 which allows, in this case, both to view the same field of view. This allows one, for example, to use the retroreflector illumination such as 2235 for the PSD detected target, and the TV camera to obtain normal scene images, or to determine other target presence and location, for example those near the more rapidly and easily detected PSD sensed target (knowing where it is, via its output signal related to the output scan of the TV camera).
  • An IR (infra-red) LED or IR reflecting reflector can be used even with bright room lighting suitable for TV camera use. The LED or other retroreflection-specific light source can light up the whole object, but effects such as saturation don't concern the TV image as they can when strong retro signals result with TV cameras. [0654]
  • As noted, a feature of such a combination allows the PSD sensor system, for example, to find one target, and use the TV camera to find the rest, which is made easier once the first one is identified, since the others can be specified a priori to be within a given search area or path from the first target. [0655]
  • It is further noted that an inverse type system can be made, where the background surface (e.g. on a desk top) appears bright, and the target is black. This can be done with retroreflector material or even white paper on a desk top, for example. In this case the target object could be one's finger, which would cover up the retro-reflector, and the PSD would give a rough output as to its x and y position. By using a strip of one-axis PSDs, one can find its position more accurately. For example, 8 parallel PSD detectors [0656] 2280 giving x outputs to an 8 channel common PC computer A-D data acquisition card 2282 can provide finger 2285 location in x and y (the latter only to a level of 1 part in 8), and the pointing angle of the finger (roll in the xy plane). This is much faster than a TV camera for this purpose. That is, the finger extended to detector 3, and the top end was at VLEFT while the bottom portion, on detector 2, was at VRIGHT.
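The strip-of-PSDs reading might be reduced to a finger position and roll angle as in the following sketch, where each covered strip contributes one (x, strip-pitch) point and a least-squares slope gives the roll; the pitch and readings are invented.

    import math

    def finger_from_strips(x_readings, strip_pitch_mm=10.0):
        """x_readings[i] is the x output of strip i, or None if the strip is uncovered."""
        pts = [(x, i * strip_pitch_mm) for i, x in enumerate(x_readings) if x is not None]
        if not pts:
            return None
        x_mean = sum(p[0] for p in pts) / len(pts)
        y_mean = sum(p[1] for p in pts) / len(pts)
        roll = 0.0
        if len(pts) > 1:
            num = sum((x - x_mean) * (y - y_mean) for x, y in pts)
            den = sum((y - y_mean) ** 2 for _, y in pts)
            roll = math.degrees(math.atan2(num, den))   # tilt of the finger in the xy plane
        return x_mean, y_mean, roll

    # Finger lying across strips 2 and 3, tilted slightly:
    print(finger_from_strips([None, None, 12.0, 14.0, None, None, None, None]))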
  • Previous copending applications illustrate a fiber optic alternative in which light enters the fibers at one point, and is dispersed to a single fiber or a group traveling to the fiber end, which then acts as a target, and can be provided on an object (even during molding or casting thereof). This can be less obtrusive than individual LEDs, for example. [0657]
  • These applications have also identified a co-target, which is a target put on an object for the purpose of telling a computer based camera obtaining its image where to look for other targets in the image. This can be useful, as can a special target which is placed on the object in such a way as to indicate the object's orientation and to identify the object itself if desired, just by looking at the target (which is known relative to the data base of the object). See also U.S. Pat. No. 5,767,525. [0658]
  • Both of these special target types are useful with the invention here disclosed. [0659]
  • FIG. 23[0660]
  • FIG. 23 illustrates inputs to instrumentation and control systems, for example those typically encountered in car dashboards, to provide added functionality and to provide aids to drivers, including the handicapped. [0661]
  • Illustrated is an embodiment providing input to automotive control systems such as usually associated with car dashboard instrumentation to provide added functionality and to provide aids to drivers, including the handicapped. In this case the car is real, as opposed to the toy illustration of FIG. 4 in which the dash is a toy, or even a make-believe dash, and the car is simulated in its actions via computer imagery and sounds. [0662]
  • As shown, [0663] driver 2301 holds gear shift lever 2302 in the usual manner. Target datums 2305-2308 are on his thumb and fingers (or alternatively on a ring or other jewelry, for example) or his wrist, and are viewed by miniature TV camera stereo pair 2320 and 2321 in the dash near the area of the gear lever. Light sources as appropriate are provided with the cameras; particularly of use are IR LEDs 2323 and 2326 near each camera respectively.
  • [0664] Computer 2340 reads the output of each TV camera, and computes the position and relative position of the targets, either with respect to the camera pair, or to each other, or to gear lever 2302 (which itself may be targeted if desired, for example with target 2310), or to some other reference. Or the computer may simply look for motion of any object (e.g. a finger) or target on an object (e.g. a ring) above some base level of allowable motion, in the event that the user wished to signal an action just by moving his finger, say (regardless of its position, or with the condition that it be within a certain window of positions, such as between the 1 and 3 o'clock positions on the steering wheel). Movement can be detected by comparing successive frames, or by blurred images, for example.
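The "motion above a base level, within a window of positions" test that computer 2340 might apply can be sketched as below; the pixel window, movement threshold and the action table are illustrative assumptions.

    def finger_signal(prev_xy, curr_xy, min_move=5.0, window=((300, 500), (100, 300))):
        """True if the target moved more than min_move pixels while inside the window."""
        (x0, x1), (y0, y1) = window
        x, y = curr_xy
        in_window = x0 <= x <= x1 and y0 <= y <= y1
        moved = ((x - prev_xy[0]) ** 2 + (y - prev_xy[1]) ** 2) ** 0.5
        return in_window and moved >= min_move

    ACTIONS = {"wiggle_in_window": "increase fan speed"}   # user-reprogrammable mapping
    if finger_signal((400, 200), (410, 200)):
        print(ACTIONS["wiggle_in_window"])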
  • The driver may, with this embodiment, signal a large number of different actions to the computer just by moving his fingers while holding the gear lever, or, as is even more relaxing, letting his hand rest on the gear lever with fingers pointing down as shown, which points datums on the tops of his fingers toward the dash or the roof section above the windshield, where cameras such as [0665] 2345 and 2346 can be located relatively easily (see also the armrests in FIG. 10). It is noted too that the steering wheel 2360, rather than, or in addition to, the gear lever, could also be used as a point of observation of the driver (these two locations are where drivers normally rest their hands, but other places such as near armrests etc. could be chosen too). In this instance an advantageous alternate camera location is in the headliner, not shown, which allows viewing of the fingers or targets thereon from above.
  • Indeed the steering wheel is a natural place, where at the 10 and 2 [0666] o'clock positions 2361 and 2362 in normal driving, one can wiggle one's thumb, or make a pinching gesture with thumb and first finger, which could be programmed to actuate any function allowed by the car's control microcomputer 2350 connected to the TV camera processor 2340 (the two could be one and the same, and both likely located under the dash). The program could be changed by the user if desired, such that a different motion or position gave a different control function.
  • Actions chosen using finger position, or relative position, or finger motion or path, could be control of heating, lighting, radio, and accessories, or for handicapped and others could even be major functions, such as throttle, brake, etc. [0667]
  • The data needed is analyzed, and fed by the computer to actuate the appropriate control functions of the vehicle, such as increasing fan speed, changing stations and the like. [0668]
  • Clearly things other than fingers could be observed by a suitable camera system of the invention. These include extremities of the body, elbows, arms, and the head. Items actuated by the driver can also be observed much like the car game or toy example of FIG. 4 above. Very low cost and interchangeable actuator control panels could thus be sold to suit the driver whoever it was. This leads to a portion of the instrument panel being able to be individually tailored, without any change in mechanism used to acquire the data. Some people could use buttons, others sliders, and the like, to control for example, the same heating functions. [0669]
  • It is noted that items on the fingers or wrists can also be used as targets, such as rings, bracelets, etc. It is also noted that in cars with column mounted shifters, a single camera or set of cameras overhead, or even in the top of the dash, can see the driver's fingers and hands on the steering wheel and the shifter, as well as on any signal stalks on the steering column. [0670]
  • FIG. 24[0671]
  • FIG. 24 illustrates a control system for use with “do it yourself” target application. [0672]
  • LED light sources can be used advantageously as targets with the invention, especially where very high contrast is needed, as is achievable with modulated LED sources and demodulated PSD based detectors. [0673]
  • However, an advantage of reflective targets, and retro-reflective targets in particular, as opposed to LED targets, is that you can easily put them on an object at very little cost, without requiring the object to have batteries, wires or the like. This means that objects not designed for the purpose, such as a young girl's favorite doll, can be easily equipped with small unobtrusive colored and/or retro-reflective targets (if suitable natural target features aren't available, as is often the case), and this favorite toy becomes the input device to a game of doll house or the like on the screen. With suitable software support the child can have her doll playing in the White House on the screen! And the audio can suit as well; for example, the first lady could talk back! [0674]
  • To recapitulate, if you don't acquire the object with specialized targets in/on it, then you need to apply them to it, if you require the benefit of the increased brightness or contrast they can offer. While future computer advancements may make such artifices unnecessary, today many of the desirable applications disclosed herein depend on same, if response speed, reliability and low cost are paramount. Retroreflective material such as Scotchlite 7615 is naturally gray appearing and, unless brightly colored for ease of further identification, is quite unobtrusive to the user. Indeed it can be colored the color of the portion of the object on which it is provided to make it even more so (except of course as seen along the path from the light source illuminating it, which is not seen by the average user except in rare situations). [0675]
  • Different targets of all sizes can be used, but if the user is to place them, he needs to teach the system which ones he put where, unless he only puts them in specified places which could be pre-entered in a computer program, like green targets on hands, square ones on feet, and so forth. [0676]
  • Data Base Teach-In [0677]
  • The datums on an object can be known a priori relative to other points on the object, and to other datums, by selling the object designed using such knowledge (or measured after the fact to obtain it) and including with it a CD ROM disc or other computer-interfaceable storage medium having this data. Alternatively, the user, for example, can teach the computer system this information. This is particularly useful when the datums are applied by the user on arbitrary objects. [0678]
  • One can create a simple model of the object by simply using the camera of the invention to acquire a 2D outline of the object, on which the target datums can be noted automatically or manually. A more involved 3D digitized model can also be created with the invention, and the datums associated with it. [0679]
  • One can hold the desired object up to the TV camera, and use the computer with a special program to try to find good datums to use among the natural features (e.g. a bright spot such as a coat button). If one is found, the object can be moved and the degree of function at different ranges and angles determined. If it is satisfactory, photogrammetrically as well, for the calculations of the locations and orientations desired, this natural datum can be used, and another found. If artificial ones are required, for example because nothing else can be reliably found on the object itself, this requirement can be indicated by the program. Or an alternative activity able to use the less capable datums could be suggested to the user (e.g. less angular variation, less motion, closer to the camera, covering up a distracting portion such as a belt buckle having glints, etc.). [0680]
  • Again, you would teach the unit what happens in the normal course of operation. If, for example, a target were obscured, a prompt can be provided to the user to, say, move the target to a new location, or to suggest that an additional redundant target be placed on the object. [0681]
  • In the airplane game of FIG. 5, let us say that the user wants to construct his own object, and just puts 3 retroreflective targets (or a triangular or other shaped target also allowing a 4-6 degree of freedom solution) on a plane model he purchases at a store. Then, having the software which provides real airplane video and sounds, he enters a teach mode in the program which steps him through (or automatically sets him up for) the issues here discussed. [0682]
  • One can input setup information to the computer, for example by filling out a table of where the hands, feet, etc. would be. And one can put the object with the target in front of the camera in a normal position, and the thing would be taught if one points it out on the screen, or by other means. [0683]
  • Standard Activity Frameworks [0684]
  • It is considered a very useful characteristic of the invention that standard frameworks for activity can be provided by a vendor on software discs or over the internet, which allow the user to easily construct his own activity. This includes, for example: instructions on how to attach datums (usually provided with the software); instructions on where to place datums, or on how to select natural datums capable of use, including tests, by showing the object with the natural datum to a camera used for the invention, and the computer running a test program to determine if the TV image obtained is sufficient for use in some desired mode (realizing it might be sufficient for a lower-movement or lower-speed activity, but not for full motion in a variety of positions over a large depth of field). [0685]
  • The framework can include software for specialized datum detection included with the game kit for example. [0686]
  • The framework can have software to tailor game or other activity software to the taught in positions and movements of the game player (human, doll, or whatever). [0687]
  • A diagnostic and optimization program could look at a few examples of use during a warm-up period or even once a game, for example, got going, and then optimize various parameters to suit, such as: [0688]
  • algorithms for target detection, even varied to suit different portions of the game [0689]
  • Photogrammetric equations, and their optimization for object position and orientation, even varied to suit different portions of the game [0690]
  • Lighting related parameters such as LED power, LED pulse time if used, camera integration time, etc. also even varied to suit different portions of the game, and of course to suit the room, distances from the camera and so on. A warning of slow response, for example, could be given if working parameters were not met, so the user could change a condition if he wished. [0691]
  • As noted above, the program could suggest final changes to target placement or type for better performance. This could include the use of a larger size target in a given location to improve definition, the use of a distinctive shape or color target to improve identification, the use of a retroreflector rather than a plain target (and the associated need for auxiliary lighting along the retroreflector axis), the need for a strong LED target (not preferred for most activity), and so forth. [0692]
  • In addition, the standard program framework could assist the user in the construction of the activity itself. For example, the airplane game of FIG. 5 could have a library of various display and aural options which the user could select to tailor his game as desired. Indeed, such program elements could cross from one game type to another (e.g. the car dash of FIG. 4, if it were an airplane dash, could use the airplane action display imagery employed in the game of FIG. 5). In addition, some elements might cross over to non-game activity as well. [0693]
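As an illustration of the lighting-related tuning mentioned above (LED power or pulse time, camera integration time), the following minimal Python sketch adjusts an integration time until the brightest target pixel sits near a desired level; the function name and all numeric constants are illustrative assumptions, not values from the disclosure.

    def tune_integration_time(measure_peak, target_level=200.0, t_ms=10.0,
                              t_min=1.0, t_max=40.0, iterations=10):
        # measure_peak(t_ms) returns the brightest target pixel value observed with
        # a given camera integration time (or, equivalently, LED pulse length)
        for _ in range(iterations):
            level = measure_peak(t_ms)
            if abs(level - target_level) < 10:
                break
            # for an LED or retroreflector return, brightness scales roughly with integration time
            t_ms = min(t_max, max(t_min, t_ms * target_level / max(level, 1.0)))
        return t_ms

    # stand-in measurement; real code would read the camera image each iteration
    print(tune_integration_time(lambda t: min(255.0, 12.0 * t)))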
  • A flow chart illustrating some of the above steps is shown in FIG. 24. The steps are as follows. [0694]
  • A. Load Test and diagnostic software into computer and put object desired in front of TV camera system at typical distance. [0695]
  • B. Determine which feature of the object, if any, is usable as a target datum, or whether an image of a bulk portion of the object (such as the head) can be used. [0696]
  • C. If added targets are needed per software instruction, affix targets per instruction at recommended locations for the object and game or other activity [0697]
  • D. Test these targets using the TV camera system; determine whether any must be replaced or moved, or additional targets put on. [0698]
  • E. If targets need to be changed, do so and retest. [0699]
  • F. Run game with first settings determined [0700]
  • G. Test targets in the computer model of the game; determine whether changes are needed. [0701]
  • H. If so, make the recommended changes and retest. Changes can be to lighting, target type, target location, camera parameters, photogrammetric equations, background, etc. [0702]
  • I. Test by moving the object into the different positions, orientations and velocities recommended by the game program. [0703]
  • J. If changes suggested, make and retest (optional—one might acquiesce to poorer performance just to get started) [0704]
  • K. Play game one or more times [0705]
  • L. If desired, record key parameters (target brightness, velocities, ranges in position and orientation, backgrounds, etc.) for further analysis. [0706]
  • M. When the game is finished, analyze further and determine changes, if any. [0707]
  • For a pre-made object, idealized for the game, most of the initial steps are unnecessary as long as the recommended game settings, light, camera and other parameters are adhered to and the surroundings are satisfactory. Nonetheless, the test program can be used to optimize these as well. [0708]
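To make steps L and M above concrete, the following minimal Python sketch records a few key per-frame parameters during play and summarizes them afterwards; the class name SessionLog, the stand-in measurements, and the advice in the comments are illustrative assumptions, not the actual game software.

    import statistics

    class SessionLog:
        def __init__(self):
            self.brightness = []
            self.speed = []

        def record(self, target_brightness, target_speed_px_s):
            self.brightness.append(target_brightness)
            self.speed.append(target_speed_px_s)

        def summary(self):
            return {
                "mean_brightness": statistics.mean(self.brightness),
                "min_brightness": min(self.brightness),   # very low: consider more LED power or exposure
                "max_speed_px_s": max(self.speed),        # very high: warn of possible slow response
            }

    log = SessionLog()
    for b, v in [(180, 300), (95, 850), (210, 120)]:      # stand-in per-frame measurements
        log.record(b, v)
    print(log.summary())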
  • FIG. 25[0709]
  • FIG. 25 illustrates a game experience with an object represented on a deformable screen. As has also been discussed, one can physically interact with the object screen. For example, if one actually touches the screen, one can deform the screen and measure its deformation. This was described in copending application Ser. No. 08/496,908 incorporated by reference, including physically measuring the indication of deformation of the backside of the screen. [0710]
  • But it can also be done by using target grids on the screen which may only be viewable by infrared means, but where the actual screen itself is physically measured from the front side or the backside, as was described in the previous application. [0711]
  • A boxing dummy such as 2515, represented as an image on the screen, that one actually hits and deforms is possible using the invention if one considers the screen to be the deformable object. In this case it is perhaps not necessary to actually encode the deformation in the screen 2520, but rather to assume a deformation, since one knows where one hit it, by determining a target or other feature position such as 2525 on the hitting object, such as boxing glove 2530, observed by camera system 2535, whose images are processed by computer 2540 to obtain the glove position. Display processor 2545 uses this glove position data to modify a computer-modeled 3-D representation of an opponent stored in data base 2550, and to drive display 2560, for example providing said display on a large rear projection TV screen 2565. [0712]
  • For example, consider the case where the screen itself is a deformable membrane. In the copending Ser. No. 08/496,908 invention, the screen deformation upon physical contact was measured and used as an input to the game. Here, however, I have illustrated an alternative situation in which one determines, from the position of the object making contact, where the hit occurred and, if desired, the motion involved in the hit, i.e. its velocity and/or trajectory, obtained by tracking the targeted glove just before it hit the screen. This leads to the force and direction of contact, using the targeted extremities of the player, in this case playing at boxing (or karate, for example, in another embodiment where feet and hands, and elbows too if desired, would be so determined and tracked). [0713]
  • In this case, one simply calculates an estimated effect upon the dummy, which in this case is actually fought by the user in terms of the resistance of the screen. It isn't totally lifelike but it is at least a physical response and, if desired, the image of the dummy goes down or recoils or doubles up in pain or whatever (note in this case the projection should desirably be on a flat or slightly curved screen, not a highly curved one which would not have the right shape in more than one position). None of this is very pretty but it sells games![0714]
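A minimal Python/NumPy sketch of the velocity and force estimate described above, taken from the last few tracked glove-target positions before screen contact, might look as follows; the glove mass, stopping time and function name are illustrative assumptions.

    import numpy as np

    def hit_vector(track_m, fps=60.0, glove_mass_kg=0.4, stop_time_s=0.02):
        # track_m: the last few tracked 3-D glove-target positions, in metres
        p = np.asarray(track_m, dtype=float)
        velocity = (p[-1] - p[-2]) * fps                  # m/s just before contact
        speed = float(np.linalg.norm(velocity))
        force_n = glove_mass_kg * speed / stop_time_s     # crude momentum/impulse estimate
        return velocity, speed, force_n

    print(hit_vector([(0.10, 1.20, 2.00), (0.11, 1.20, 1.90), (0.12, 1.21, 1.78)]))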
  • The actual actions can be modeled in a computer program capable of providing a 3D rendered display for a near-lifelike representation of the result of an action. This would apply to sword fights, soccer games, and other activity described in this and related applications. For example, using a targeted sword rather than a boxing glove, one can physically slash a real life-size opponent represented by an image on a screen and, since one knows where the slash occurs on the projection TV image by virtue of the target point determination of the sword tip using the camera system of the invention, a representation of blood can emerge from the screen image, or a simulated head can fall off, or whatever. [0715]
  • Throwing things need not be bloody. As has been mentioned above and in the applications incorporated by reference, all kinds of sports possibilities exist, such as: [0716]
  • Hitting sports: baseball, cricket, boxing. [0717]
  • Throwing and firing sports such as baseball, shooting, archery, etc.; and football (American), football (soccer), hockey, field hockey, lacrosse, etc., played with a goalie in the goal. [0718]
  • Games are also possible such as throwing paper airplanes, where one can easily affix to one's plane lightweight Scotchlite retroreflector targets so as to be able to track its motion in three dimensions using the cameras of the invention, using the computer system of the invention for the purpose of scoring the game, or to drive a screen display, or to create sounds, or what have you. Again, imagery from the FIG. 5 airplane game could be employed here as well if desired. [0719]
  • The video gaming experience of the invention goes well beyond that obtainable with today's video games using keyboards, buttons, joysticks, and mice. Perhaps the most dramatic issue is that of the human scale that is possible, where the player can indeed interact with a life-size image on the screen, if desired, at an affordable price thanks to the television, particularly the high definition TV. Such displays can also be in three dimensions, using switchable LCD glasses and other well-known stereo techniques. [0720]
  • The use of such glasses with a touch screen having other novel features is itself shown in a copending invention by Tim Pryor entitled "Man-Machine Interfaces", Ser. No. 08/496,908, incorporated by reference herein. Such stereo TV effects, if they do not burden the vision or functioning of the player, can provide a very realistic experience. This experience can be used with or without the 3D stereo effects, but with the large screen, for a variety of purposes, including gaming and teaching. [0721]
  • One aspect of the invention shown above illustrates a gaming situation with respect to a sword fight. This can be made quite realistic, but without a great deal of cost, using a high intensity projection TV, which is becoming ever cheaper as of this writing. One can interact with the screen or other surfaces onto which it is projected, either in a play fashion, that is by not touching the screen, or in a real fashion by actually touching the screen. In this latter case, the screen may be rigid, semi-deformable, deformable, or in fact ablated or permanently changed by the action of the game. All of these things are possible by using the targeted objects and implements such as described, to accurately determine the point of contact. [0722]
  • For total realism it may be necessary to provide some sort of force pickup connected with the sword to create a force-type experience, but this raises cost. A considerable goal of this invention is to provide all of these new and novel functions at an affordable price by utilizing easily detectable, stereo-camera-sensed datums on objects and low cost cameras which can be shared, so to speak, with other applications such as Internet telephony and the like. Again, if this is a goal, then retroreflectors make the best datums today, unless the operation is in a controlled region where background discrimination and speed are less of an issue. LEDs are good too, but are cumbersome and obtrusive in many situations, and too heavy, or exerting too high a moment, in others (e.g. a paper airplane). [0723]
  • As was pointed out in the aforementioned copending applications, it is possible to change the viewpoint of the image projected or displayed not only with respect to the head of the player, but also with respect to any of the extremities, which themselves might be targeted, or with respect to an implement such as a sword or another object carried by the player. [0724]
  • FIG. 26[0725]
  • A simple way to determine the existence of motion, and to calculate motion vectors with low cost TV cameras, is to use the blur of a distinct target during the integration time of the camera. For example, in the TV camera image 2601 there is a distinct datum 2605. This is indicative of an LED or retro disc source on an object, for example, with the background ignored (by setting an illumination or color threshold, for example). [0726]
  • Now consider what happens if the object moves during the camera integration (exposure) time, a variable which is often controlled in the camera as a function of light received, but which could also be controlled to aid the invention here. If the movement is in the x direction, the datum image looks like 2610, assuming the datum moved in the image field as far as indicated during the time the camera chip integrated light on its face. If the movement was in x and y equally, then the image would look like 2615. Note that the intensity of points in the image is less than in the static case for the same integration time, as the light from the datum is spread over more pixels. [0727]
  • For a simple xy situation, the elongation x′ and y′ of the image in x and y can be used to give a motion vector, since x′ divided by integration time gives the x velocity. [0728]
  • For 3-D motion, this is somewhat more complicated, as the object can move in z as well. And if rotation occurs over long integration times, the elongation will be arc-shaped rather than the simple straight-line case shown. These effects can generally be calculated out by observation of the image (or images, if a stereo pair of cameras is used) and by calculation of the 3-D orientation of the object. [0729]
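A minimal Python/NumPy sketch of the streak-based velocity estimate for the simple x-y case might look as follows; the threshold value and the optional subtraction of the static spot diameter are illustrative assumptions.

    import numpy as np

    def blur_velocity(image, threshold, integration_time_s, spot_diameter_px=0):
        # pixels belonging to the streak left by the moving datum
        ys, xs = np.nonzero(image > threshold)
        if xs.size == 0:
            return None
        x_len = max(xs.max() - xs.min() - spot_diameter_px, 0)   # elongation x'
        y_len = max(ys.max() - ys.min() - spot_diameter_px, 0)   # elongation y'
        return x_len / integration_time_s, y_len / integration_time_s   # pixels per second

    img = np.zeros((32, 32))
    img[10, 5:15] = 200                       # a streak from purely x-direction motion
    print(blur_velocity(img, 100, 1.0 / 60))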
  • It is noted that some blurring of target datums can be useful for subpixel resolution enhancement. This can be motion blur, or blur due to a somewhat out-of-focus condition (effectively making a small luminous target in a large field of view look like a bigger, but less intense, blob covering more pixels). Such a purposeful defocus could even be done with piezoelectric actuation of the camera lens or array chip position, to allow in-focus conditions when not actuated. Or, in the simple case of a bandpass filter such as 25 snapped over the lens 24 in FIG. 1b, this filter could purposely be optically shaped to slightly defocus the system when used for target as opposed to scene viewing. [0730]
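The benefit of spreading the light over more pixels can be seen in a simple intensity-weighted centroid, which locates the blurred blob to a fraction of a pixel; the following Python/NumPy sketch and its threshold are illustrative, not the actual processing used.

    import numpy as np

    def subpixel_centroid(image, threshold):
        ys, xs = np.nonzero(image > threshold)
        w = image[ys, xs].astype(float)                    # intensity weights
        return float((xs * w).sum() / w.sum()), float((ys * w).sum() / w.sum())

    img = np.zeros((16, 16))
    img[7:10, 7:10] = [[40, 80, 40], [80, 160, 80], [40, 80, 40]]   # a defocused spot
    print(subpixel_centroid(img, 20))         # close to (8.0, 8.0), a fraction of a pixel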
  • Calibration [0731]
  • Note that in FIG. 15 the sword tip position versus the screen image can alternatively be calculated from a knowledge of the part data base of the sword and three points determining its position and orientation in space, plus a knowledge of where the projected image on the screen lies. This may require calibration at the beginning, for example projecting, using the TV display, a computer-generated target point on the display screen, which can be viewed by the TV camera(s) of the invention and used to set reference marks in space. [0732]
  • The use of screen-generated targets allows one to nicely set up the TV cameras used to image objects in relation to points on the screen (points which the objects might try to interact with, when a display of something appears at that physical point). Doing this requires that the TV cameras be fixed from the time of setup to the time of use, as is typically the case. More stringent is the requirement that the camera be in a position to view the screen. Where this is difficult, for example when the cameras face outward from the screen, a mirror can be used. The mirror in this case can have fixed marks, just like an object, which allow its orientation to be determined by the camera computer system and thus any error in its pointing angle to be adjusted for. [0733]
  • Screen generated targets can also be used to calibrate the field of view of the camera to take out lens errors and the like, and to adjust relationships between two cameras of a stereo pair (or even more sets of cameras). [0734]
  • For example, if two cameras are arbitrarily pointed in the direction of the screen, a spot can be projected on the screen which will register in each camera image. Since the spot position is known in x and y due to the projection, and z can be measured with a ruler, the system can calculate the pointing direction of the cameras as a result. [0735]
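A minimal sketch of this calibration, assuming OpenCV is available, the camera intrinsics are known from a prior lens calibration, and four spots are projected at known screen coordinates, might look as follows; all numeric values are stand-ins.

    import numpy as np
    import cv2

    # Screen-frame coordinates (mm) of four projected spots; z = 0 on the (flat) screen.
    screen_pts = np.array([[0, 0, 0], [400, 0, 0], [400, 300, 0], [0, 300, 0]], dtype=np.float32)
    # Pixel locations where each spot was found in one camera's image (stand-in values).
    image_pts = np.array([[210, 160], [455, 150], [470, 340], [205, 335]], dtype=np.float32)
    # Camera intrinsics assumed known from a prior lens calibration.
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

    ok, rvec, tvec = cv2.solvePnP(screen_pts, image_pts, K, None)
    print(ok, rvec.ravel(), tvec.ravel())   # camera orientation and position relative to the screen

The same procedure, repeated for the second camera of a stereo pair, gives the relative geometry of the two cameras with respect to the screen.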
  • Orientation Codes [0736]
  • Inventions by one of the inventors and his colleagues describe a useful machine-readable code for use on objects which can give the orientation of the object from the point sensed, and provide an identification of the object as well. One could even call up a server over the internet and download a data description of the object, and the relation of that object to the software provided. [0737]
  • It is noted that special targets useful in the invention may be designed of diffractive or holographically based material so as to provide, for example, directional and/or color-based responses to light input. This can be used to recognize or identify targets, and to cause a desirable light distribution on reflection which aids the detection process by a suitable camera. [0738]
  • FIG. 27[0739]
  • Here discussed are convenient high brightness (and contrast) retroreflective target items, such as retroreflective jewelry and makeup according to the invention, which can greatly aid the use of the invention by persons. For example, a wristwatch can contain high specific reflectivity retroreflective glass bead or corner cube material in its face or hands that can be sensed by the camera or cameras of the invention in order to easily find the wrist and hand in a field of view. Similarly, rings on the fingers containing such material can greatly aid the ability of the camera system to see the fingers and to get close enough that relatively simple image processing can find the fingertips from the ring or, with more difficulty, from the wristwatch. Similarly, belt buckles, bracelets, pins, necktie clips and the like can all serve this purpose in a decorative and aesthetically pleasing manner. [0740]
  • Even makeup can be produced whose chemical formulation incorporates retroreflective beads (typically 0.002-0.003 inch in diameter on an individual basis), such as nail polish, lipstick, eye shadow, and the like, which all serve some purpose for computer interaction in various software scenarios (especially on the fingertips). Specialized makeup for other parts of the body can be created, e.g. for the wrist, toes or what have you. [0741]
  • Consider ring 2801 having band 2802 and a "jewel" comprised of a corner cube retroreflector 2803, capable of very high contrast return signals under near on-axis illumination. Or consider that the jewel could be a diamond (real or synthetic) cut to reflect light incident from many angles in a somewhat similar manner. Or consider ring 2815 having five corner cubes, 2826-2830, each pointing in a different direction, to allow operation from a variety of finger positions. [0742]
  • Consider too ring band 2840, comprised of a base ring 2845 with retroreflective bead tape material 2850 attached, covered with a protective plastic overlay 2855 (thicknesses exaggerated for clarity). The overlay could be either totally transparent or, alternatively, of bandpass material that would only allow reflection back of a specific wavelength band (e.g. matching an LED illumination wavelength). Or the user might choose to wear multiple rings, each of a different color, which could be color identified. Or there could be multiple users, each with a different color, say. Note that a special flat tape type retroreflector can be provided having a microprism grating or grille, or a diffraction grating or grille, on its face which directionally alters the incoming and outgoing radiation so as to be able to be seen from more nominal angles than normal material such as Scotchlite 7615 of the 3M company. [0743]
  • Additional Information re FIG. 1 Embodiment [0744]
  • The retroreflection illumination light source is substantially coaxial with the optical axis of the TV camera when retroreflectors are used. The LED is the preferred source to illuminate reflective targets. [0745]
  • If an LED is used, it has the advantages of a low power requirement, being self-luminous, and having a known wavelength. This means that the camera can be filtered for this wavelength quite easily, although, if it is, it won't see other wavelengths very well by definition. [0746]
  • LED light sources for target illumination are preferable because of their programmability, i.e. the ease of turning them on and off or modulating them at a given frequency or pulse duration, and because they are low in cost and energy consumption. Operating in the infrared or other non-visible wavelengths, they do not bother the user. [0747]
  • FIG. 1a has illustrated a simplified version of the invention using even one retroreflective item, such as a ring, a thimble with a target on it, a snap-on finger target, a colored or retroreflectively painted nail, or another feature on the person. The camera used for this is either a special camera dedicated to the task or one shared with video-imaging duties. [0748]
  • In order to operate the invention, the LED light source (which in one embodiment is comprised of a ring of LEDs such as 26 around the camera lens 24, pointing outward at the subjects to be viewed) is turned on, and in one case a bandpass filter (passing the LED wavelength) such as 25 is placed over the lens of the camera that might normally be used simply for acquiring images for Internet telephony or what have you. This filter can be screwed, slid or snapped on, or attached in any other way that allows it to be easily removed when non-filtered viewing is desired. [0749]
  • To make the measurement, the LEDs, in this case in a ring arrangement surrounding the lens, are easily attached to the camera by suitable attachments, either permanent, quasi-permanent (e.g. via a highly sticky adhesive), or in some cases temporary, given the wide variety of cameras today. [0750]
  • It is also an alternative to have the lights not surrounding the lens axis but off to one side, though as close to the axis as possible for best retroreflective performance. [0751]
  • In the particular embodiment here, the LEDs are energized and are near-infrared devices operating at a wavelength of 0.85 microns. They provide the illumination needed without being distracting to the user. Visible LEDs are usable too if they do not distract the user. A filter on the front of the camera largely removes the effect of light outside the wavelength of the illumination. [0752]
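A minimal sketch of finding the bright target returns in a frame taken through the bandpass filter, assuming OpenCV and NumPy, might look as follows; the threshold and minimum-area values are illustrative assumptions.

    import numpy as np
    import cv2

    def find_retro_targets(gray, threshold=200, min_area=4):
        # bright returns from retroreflectors/LEDs seen through the bandpass filter
        _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
        n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
        return [tuple(centroids[i]) for i in range(1, n)          # label 0 is background
                if stats[i, cv2.CC_STAT_AREA] >= min_area]

    frame = np.zeros((480, 640), dtype=np.uint8)
    frame[100:104, 200:204] = 255                                 # stand-in target return
    print(find_retro_targets(frame))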
  • It is also possible to detect datums on the object without the additional use of auxiliary illumination and the optional wavelength-based filtering process described above. It is further possible to do this with white-light illumination, which can be used to illuminate the object as well as the datums in cases of low light and so on. In this case the desire is to have the datums as distinguishable as possible, and the inventors have found color and shape particularly useful for this purpose, typically in combination. For example, a triangular-shaped target can be used, whose solution is somewhat different from that above: it is not multiple point targets that are used to solve the equations, but rather the lines of the edges of the target. [0753]
  • A question to answer is whether the camera system is required both for image production of the object and for viewing certain types of special targets, or can be devoted just to the special-target purpose. In the latter case the lighting is easier, because there is only one issue to contend with: seeing the light reflected from the special target, which typically has high brightness and/or high contrast or color contrast relative to its surroundings. This can be done at specialized wavelengths, particularly in the very near infrared (e.g. 0.75 to 0.9 microns wavelength) where strong LED sources exist; such light is visible to the cameras in general use, but is not bothersome or obtrusive to the user. [0754]
  • If the camera is also to be used for general imaging, but not simultaneously with special target detection, a special bandpass filter transmissive to the LED, laser or other sufficiently monochromatic light source wavelength can be used to cover the camera lens. The filter is conveniently provided with a chain or, preferably, a sliding mount, to slide in front of the lens when this function is needed. This function can be automated with, for example, a solenoid at added cost, to provide quick switching. Electronically switchable filters can also be used where faster switching is required. [0755]
  • Where the function is needed concurrently with imaging, more difficulty remains, as the TV camera image contains both target and scene information. Bright retroreflector indications will show bright in the TV scene image as well. One solution is to take two TV images, the first with retro illumination on and the second with it off. If the frame rate is double the usual display frame rate, no change in response is detected. The integration times of the two frames are likely to be different, being adjusted once for the retro return case and next for the scene illumination at that instant. To do this quickly in one frame may require special exposure control or retro LED illumination control procedures. [0756]
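A minimal Python/NumPy sketch of the lit/unlit frame differencing described above might look as follows; the threshold value is an illustrative assumption.

    import numpy as np

    def targets_only(frame_led_on, frame_led_off, threshold=40):
        # subtract the unlit frame so that only the retro/LED returns remain
        diff = frame_led_on.astype(np.int16) - frame_led_off.astype(np.int16)
        return (diff > threshold).astype(np.uint8) * 255

    on = np.full((480, 640), 60, dtype=np.uint8)
    on[50:54, 50:54] = 250                                        # stand-in retro return
    off = np.full((480, 640), 60, dtype=np.uint8)                 # same scene, illumination off
    print(int(targets_only(on, off).sum() / 255), "target pixels found")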
  • Exposure adjustment is also an issue when stereo cameras are utilized; the correct exposure for one camera may not be the same as for the other, given different tilt angles of the object. [0757]
  • For two-camera stereo imaging, one camera can be a master, used for conventional images, with the other a slave used only for determining object location. It is noted that if the stereo pair are spaced roughly like the eyes (e.g. 6-8 inches apart) and pointed straight ahead or nearly so, the images created can be used to drive a stereo display. This could be of considerable interest at the other end of an internet connection, for example, where the other person could view the person being imaged in 3D using "Crystal Eyes" or other brands of LCD glasses and appropriate video displays. [0758]
  • The invention can use special datums such as round or point-source LEDs, retroreflective, or other contrasting material comprising spots or beading defining lines or edges, or it can use natural object features such as fingertips, hands, head, feet, or eyes. Often a judicious combination of natural and artificial features can be chosen to minimize special features and their application, while still making use of their ease of discovery at high speed in a large field of view. For example, if one finds a high contrast, perhaps specially colored, artificial feature, one can often reduce the search window in the field of view to the immediate area around that feature, where other related natural (or artificial) features are likely to lie. [0759]
  • Note that in a time sense, one may often be dealing with limited data due to momentary obscuration of some datums, or of the whole object. In this case an anticipated further movement of the object to some future position may be calculated so as to create as small a search window as possible for the missing datums in the future. [0760]
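A minimal sketch of such a predicted search window, assuming simple constant-velocity motion between frames, might look as follows; the window half-size is an illustrative assumption.

    def predicted_window(prev_xy, last_xy, half_size=20):
        # constant-velocity guess at where an obscured datum will reappear next frame
        vx, vy = last_xy[0] - prev_xy[0], last_xy[1] - prev_xy[1]
        px, py = last_xy[0] + vx, last_xy[1] + vy
        return (px - half_size, py - half_size, px + half_size, py + half_size)

    print(predicted_window((100, 80), (110, 84)))     # a small window centred on (120, 88)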
  • Note that by combining LEDs of different colors, one can create light which allows illumination of individual targets of several colors, or even create effective white light illumination. Note that in this case the TV camera could employ a bandpass filter passing each of the three LED wavelengths through, but nothing else. This would discriminate against other white light sources, but still allow the colored targets to be seen. [0761]
  • Note that solid state sources other than LEDs are also desirable, such as diode lasers (including diode-pumped lasers), superluminescent devices and others. [0762]
  • Note that when flat targets become warped, for example when attached to skin or to clothing, their size as viewed changes, so in many cases size by itself is not a good indicator. The same holds true because of different views and their effect on apparent size. Shape of targets too can change, for example a circular target viewed at an angle is an ellipse. All of these issues need to be accounted for in determining target location and identification. [0763]
  • When the two images of a stereo pair are used, the angle between each camera and the object means that each camera may see a somewhat different target shape as well, and the brightness can differ, as pointed out above. It is desirable to optimally detect each target datum in each separate stereo image first, before attempting to match the images to determine where the datums coincide, which gives the z-axis range. [0764]
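Once a datum has been matched in both images of a roughly parallel stereo pair, the z-axis range follows from the disparity; the following is a minimal Python sketch, with stand-in numbers for the focal length and camera baseline.

    def range_from_disparity(x_left_px, x_right_px, focal_length_px, baseline_m):
        # for a roughly parallel stereo pair, range follows from the image disparity
        disparity = x_left_px - x_right_px
        return focal_length_px * baseline_m / disparity

    print(range_from_disparity(352.0, 331.0, 800.0, 0.17))   # about 6.5 metres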
  • When many datums are present, a match is sometimes difficult. A human can aid the match by identifying a target in both camera images during some setup stage. [0765]
  • Other data desired by the system would be, if possible, an input to tell the system how many users are present (if more than one is comprehended), and whether there is one hand or two. [0766]
  • This brings up another point, namely how to tell the system that some exception is present, or some situation in which one would either call up an exception routine or ignore the data and retry. Exceptions can be: [0767]
  • Obscured or partially obscured datums. A datum image can be compared with pre-stored criteria or previously observed results, and indications given to the operator, or alternate datum programs signaled automatically, if conditions warrant. [0768]
  • Confused datums, one behind the other, one hand visible instead of two, one person visible instead of two. [0769]
  • Datum indistinct or suspicious. One can go through a routine to check different aspects of shape if required. [0770]
  • Data taking too long to determine existence or position. Possibly look at a redundant datum? [0771]
  • Wrong targets are present, i.e. the object is not what the system was told it was supposed to be. A precheck of the targets on an object, either manual or assisted by the TV camera computer system of the invention, to make sure that they match what the database says they should be, is desirable, to assure both that the object is the right one and/or that the targets are correct. [0772]
  • A given range of motions of an object or person is not in the range of motions that has been programmed. In this case a warning to slow down can be given, or suggestions made to speed up the system, such as increasing light intensity, target brightness, etc. An initial motion check could be done, for example, by waving one's arms in a certain way that would cause the computer to register a particular user, the motion-capture algorithm to be used, a speed parameter, or anything to do with the camera and its light gathering. Ideally a first user should go through a simple training, or at least a setup routine, in which they perform certain actions, movements and other things in the range they expect to use, and let the camera system set itself up to that where possible. [0773]
  • Download of sensor information from storage media or remote sources via the internet and the like. [0774]
  • It is possible to download from an Internet website directly to the computer using known connection technology. What is interesting here is to further discuss two other alternatives. The first is downloading from the website optically based cues for the function of the target-based sensors of this system; in other words, allowing them to change their operational characteristics, and not just the characteristics of the activity involving the data obtained using them. In addition, a software agent from a computer at one end of a link can be sent out to determine characteristics and optimize, or make, the systems at the other end work with the first one (and not just for this invention). This could also be of use for the control of video cameras generally. [0775]
  • "Light" as used herein can be electromagnetic waves at x-ray through infrared wavelengths. [0776]
  • Specialized Definitions Used in the Application
  • Target Volume [0777]
  • A “target Volume” is the volume of space (usually a rectangular solid volume) visible to a video camera or a set of video cameras within which a target will be acquired and its position and/or orientation computed. [0778]
  • Interrupt Member [0779]
  • An "Interrupt member" is a device that sends a signal to the system's computer allowing a computer program to identify the beginning of one path of a target and the end of the preceding path. It can also identify a function, object, or parameter value. Examples of an Interrupt member are: [0780]
  • 1. A given key on the system's keyboard. [0781]
  • 2. A voice recognition system capable of acting on a sound or spoken word. [0782]
  • 3. A button attached to a game port, serial port, parallel port, special input card, or other input port. [0783]
  • 4. A trigger, switch, dial, etc. that can turn on a light or mechanically make visible a new target or sub-target with unique properties of color, shape, and size. [0784]
  • Quant [0785]
  • A "Quant" is a unique discretized or quantized target path (defined by location, orientation, and time information) together with the target's unique identification number (ID). A Quant is composed of a sequence of simple path segments. An example of a Quant that could be used to define a command in a CAD drawing system to create a rectangle might be a target sweep to the right punctuated with a short stationary pause, followed by an up sweep and pause, a left sweep and pause, a down sweep and pause, and finally ended with a key press on the keyboard. In this example the Quant is stored as the set (4, 1, 2, 3, 4, a, 27), where 4 is the number of path segments, 1-4 are numbers that identify the path segment directions (i.e. right, up, left, down), "a" is the interrupt member (the key press a), and 27 is the target ID. Note that the punctuation that identifies a new path direction could instead have been a radical change in path direction, target orientation, or speed. [0786]
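A minimal Python sketch of how a Quant such as the rectangle-command example above might be represented in software follows; the class layout is an illustrative assumption, not the stored format actually used.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Quant:
        segments: List[int]    # path segment direction codes, e.g. 1=right, 2=up, 3=left, 4=down
        interrupt: str         # the interrupt member that ended the path, e.g. the key press "a"
        target_id: int         # the target's unique ID

        def as_stored(self):
            # matches the stored form in the example above: (4, 1, 2, 3, 4, a, 27)
            return (len(self.segments), *self.segments, self.interrupt, self.target_id)

    rectangle_command = Quant(segments=[1, 2, 3, 4], interrupt="a", target_id=27)
    print(rectangle_command.as_stored())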
  • Light as used herein includes all electro-magnetic wavelengths from ultraviolet to near infrared[0787]

Claims (62)

What is claimed is:
1. Apparatus for input by a person of data to a computer having a display comprising
One or more Datum means provided on said person, said datum means distinguishable in reflected light
At least one TV Camera having an output
Means for determining from said TV camera output, the position of said datums and/or the orientation of a portion of said person
Means for creating on said display, a representation of at least one object, and;
Means for modifying, manipulating, or positioning said at least one object representation on said screen as a function of the position or orientation of datums or person
2. Apparatus according to claim 1 further including light source means for directing light at said member
3. Apparatus according to claim 1 wherein at least one of said datums is retroreflective
4. Apparatus according to claim 1 wherein at least one of said datums is a natural feature of said member
5. Apparatus according to claim 2 wherein said light source is an LED light source
6. Apparatus according to claim 2 wherein light from said light source is substantially invisible
7. Apparatus according to claim 1 wherein at least one of said datums is distinctive in color
8. Apparatus according to claim 1 wherein at least one of said datums is a distinctive shape
9. Apparatus according to claim 1 wherein at least two cameras are used
10. Apparatus according to claim 9 wherein said cameras provide stereo pair of images of said object
11. Apparatus according to claim 9 wherein said cameras look at different sides of said person
12. Apparatus according to claim 9 wherein said cameras look at different times at said person
13. Apparatus according to claim 1 wherein said cameras are provided with the display
14. Apparatus according to claim 1 including further means of affixing a datum
15. Apparatus according to claim 1 including further voice input means to said computer
16. Apparatus according to claim 1 including further means to allow said camera to see objects associated with said person
17. Apparatus according to claim 1 including bandpass filter means associated with at least one of said cameras
18. A method by which a person may input data to a computer, the method comprising:
providing a target on said person
providing a source of light to create an illumination field;
providing at least one TV camera proximate said light source such that the camera can detect reflection of light from said object in said illumination field
detecting radiation reflected from said person within the illumination field to create at least one tv image containing an image of said person
determining from said tv image information concerning the position and/or orientation of said target, and
providing a desired input to said computer using said determined information
19. A method according to claim 20 wherein said member contains at least one retroreflective datum
20. A method according to claim 20 wherein said light source is an LED light source
21. A method according to claim 20 wherein said Light source is substantially invisible
22. A Method for input of information by a person to a computer having a display representing at least one object comprising the steps of
Providing a datum associated with said person
Electro-optically determining, the position of at least one datum on said person in 3 dimensions
Providing a representation of at least one computer generated virtual object on said display, and
Using said determined position or orientation data, manipulating said object displayed by said computer to provide a desired visual display or audio response
23. A method according to claim 24 wherein at least one of said datums is retroreflective
24. A method according to claim 24 wherein said datum is distinctive in color
25. A method according to claim 24 wherein said datum is a distinctive shape
26. A method according to claim 24 wherein at least two cameras are used
27. A method according to claim 24 wherein said cameras provide stereo pair of images of said datum
28. A method according to claim 24 wherein said cameras look at different sides of said datum
29. A method according to claim 24 wherein said cameras are provided with said display
30. A method according to claim 24 including further step of affixing a datum
31. A method according to claim 24 wherein at least one of said datums is a natural object feature
32. A method according to claim 24 including the further step of recognizing voice input
33. A method according to claim 24 including temporary filter means for at least one lens of said cameras
34. A method according to claim 24 including the further step of sensing the gray level image of a portion of said user.
35. A method according to claim 24 including the further step of changing Sound output as a function of said data
36. A method according to claim 24 including the further step of using said display or audio for learning
37. A method according to claim 24 including the further step of analyzing movement of said datum
38. A method according to claim 24 including the further step of determining the position or orientation of a member
39. Means for aiding the determination of locations of points on a human, comprising
means providing decoration for said human, said means easily visible by a TV camera or other electro-optical device, and
Means for temporarily providing said decoration means on said human
40. Apparatus according to claim 39, wherein said decoration means is retroreflective
41. Apparatus according to claim 39, wherein said decoration is selected from a group comprising rings, bracelets, watches, lipstick, nail polish,
42. Apparatus according to claim 39, wherein said decoration is part of clothing
43. A Method for producing a display based experience for a user comprising the steps of;
Providing a computer
Providing a large screen TV display of size greater than 42 inches diagonal, the display being controlled by said computer
Providing at least one electro-optical sensor having an output
Processing in said computer said sensor output
From said processing, determining the position or orientation of a portion of a person and/or object camera, and using said computer,
Modifying said display to create a response to an action of said person.
44. A method according to claim 44 wherein said display is approximately lifesize.
45. A method according to claim 44 wherein said user touches or points at virtual objects depicted on said display
46. A method according to claim 44 wherein said user pinches, or grips virtual objects depicted on said display
47. A method according to claim 44 wherein said display varies as the users view changes
48. Method for activity involving an object, comprising the steps of
49. Providing an object
50. Determining if features can be sensed by a tv camera
51. Affixing special datums to said object where features are required for best sensing results,
52. Recording the locations of features and special datums into a data base.
53. A method according to claim 48 wherein said special datum is easily affixed by hand
54. A method according to claim 48 wherein said special datum is retroreflective
55. A method according to claim 48 wherein said special datum is linear
56. A method according to claim 48 wherein said special datum is curvilinear
57. A method of providing a game or other human activity comprising
Providing an object
Providing a member attached to said object and movable with respect thereto
Determining the position or orientation, or change therein, of said member with an electro-optical sensing system
From said determined position or orientation, or change therein, determining an input parameter to a computer program, and
Using said program, provide said game or other activity
58. A method according to claim 57 wherein said member is movable by said human
59. A method according to claim 57 wherein said member moves as a result of the action of a physical variable
60. A method according to claim 57 including the additional step of determining the position or orientation of a portion of said human
61. A method according to claim 57 Wherein said sensor is comprised of at least one TV camera
62. A method according to claim 57, wherein said position or motion is determined relative to another member or said object
US09/138,339 1997-08-22 1998-08-21 Novel man machine interfaces and applications Abandoned US20020036617A1 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
US09/138,339 US20020036617A1 (en) 1998-08-21 1998-08-21 Novel man machine interfaces and applications
US11/186,898 US7843429B2 (en) 1997-08-22 2005-07-22 Interactive video based games using objects sensed by TV cameras
US12/700,055 US20100134612A1 (en) 1997-08-22 2010-02-04 Method for enhancing well-being of a small child or baby
US12/941,304 US8068095B2 (en) 1997-08-22 2010-11-08 Interactive video based games using objects sensed by tv cameras
US13/267,044 US8614668B2 (en) 1997-08-22 2011-10-06 Interactive video based games using objects sensed by TV cameras
US13/714,693 US8760398B2 (en) 1997-08-22 2012-12-14 Interactive video based games using objects sensed by TV cameras
US13/714,727 US8847887B2 (en) 1997-08-22 2012-12-14 Interactive video based games using objects sensed by TV cameras
US13/850,561 US8736548B2 (en) 1997-08-22 2013-03-26 Interactive video based games using objects sensed by TV cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/138,339 US20020036617A1 (en) 1998-08-21 1998-08-21 Novel man machine interfaces and applications

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US09/433,297 Continuation-In-Part US6750848B1 (en) 1997-08-22 1999-11-03 More useful man machine interfaces and applications
US11/186,898 Continuation US7843429B2 (en) 1997-08-22 2005-07-22 Interactive video based games using objects sensed by TV cameras

Publications (1)

Publication Number Publication Date
US20020036617A1 true US20020036617A1 (en) 2002-03-28

Family

ID=22481590

Family Applications (7)

Application Number Title Priority Date Filing Date
US09/138,339 Abandoned US20020036617A1 (en) 1997-08-22 1998-08-21 Novel man machine interfaces and applications
US11/186,898 Expired - Fee Related US7843429B2 (en) 1997-08-22 2005-07-22 Interactive video based games using objects sensed by TV cameras
US12/941,304 Expired - Fee Related US8068095B2 (en) 1997-08-22 2010-11-08 Interactive video based games using objects sensed by tv cameras
US13/267,044 Expired - Fee Related US8614668B2 (en) 1997-08-22 2011-10-06 Interactive video based games using objects sensed by TV cameras
US13/714,727 Expired - Fee Related US8847887B2 (en) 1997-08-22 2012-12-14 Interactive video based games using objects sensed by TV cameras
US13/714,693 Expired - Fee Related US8760398B2 (en) 1997-08-22 2012-12-14 Interactive video based games using objects sensed by TV cameras
US13/850,561 Expired - Fee Related US8736548B2 (en) 1997-08-22 2013-03-26 Interactive video based games using objects sensed by TV cameras

Family Applications After (6)

Application Number Title Priority Date Filing Date
US11/186,898 Expired - Fee Related US7843429B2 (en) 1997-08-22 2005-07-22 Interactive video based games using objects sensed by TV cameras
US12/941,304 Expired - Fee Related US8068095B2 (en) 1997-08-22 2010-11-08 Interactive video based games using objects sensed by tv cameras
US13/267,044 Expired - Fee Related US8614668B2 (en) 1997-08-22 2011-10-06 Interactive video based games using objects sensed by TV cameras
US13/714,727 Expired - Fee Related US8847887B2 (en) 1997-08-22 2012-12-14 Interactive video based games using objects sensed by TV cameras
US13/714,693 Expired - Fee Related US8760398B2 (en) 1997-08-22 2012-12-14 Interactive video based games using objects sensed by TV cameras
US13/850,561 Expired - Fee Related US8736548B2 (en) 1997-08-22 2013-03-26 Interactive video based games using objects sensed by TV cameras

Country Status (1)

Country Link
US (7) US20020036617A1 (en)

Cited By (337)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020022508A1 (en) * 2000-08-11 2002-02-21 Konami Corporation Fighting video game machine
US20020025061A1 (en) * 2000-08-23 2002-02-28 Leonard Metcalfe High speed and reliable determination of lumber quality using grain influenced distortion effects
US20020126876A1 (en) * 1999-08-10 2002-09-12 Paul George V. Tracking and gesture recognition system particularly suited to vehicular control applications
US20020147650A1 (en) * 2001-02-02 2002-10-10 International Business Machines Corporation Designation and opportunistic tracking of valuables
US20020186205A1 (en) * 2000-01-28 2002-12-12 Masahiko Nakamura Volume-integral type multi directional input apparatus
US6496812B1 (en) * 2000-05-13 2002-12-17 Object Power, Inc. Method and system for measuring and valuing contributions by group members to the achievement of a group goal
GB2379493A (en) * 2001-09-06 2003-03-12 4D Technology Systems Ltd Controlling an electronic device by detecting a handheld member with a camera
US20030113018A1 (en) * 2001-07-18 2003-06-19 Nefian Ara Victor Dynamic gesture recognition from stereo sequences
US6593574B2 (en) * 1999-09-16 2003-07-15 Wayne State University Hand-held sound source gun for infrared imaging of sub-surface defects in materials
US20030189639A1 (en) * 2002-04-09 2003-10-09 Benoit Marchand Method and device for correcting the rotation of a video display
US20030212556A1 (en) * 2002-05-09 2003-11-13 Nefian Ara V. Factorial hidden markov model for audiovisual speech recognition
US20030212552A1 (en) * 2002-05-09 2003-11-13 Liang Lu Hong Face recognition procedure useful for audiovisual speech recognition
US20030227470A1 (en) * 2002-06-06 2003-12-11 Yakup Genc System and method for measuring the registration accuracy of an augmented reality system
EP1376317A2 (en) * 2002-06-19 2004-01-02 Seiko Epson Corporation Image/tactile information input device, image/tactile information input method, and image/tactile information input program
US20040027455A1 (en) * 2000-12-15 2004-02-12 Leonard Reiffel Imaged coded data source tracking product
US20040032594A1 (en) * 2000-11-08 2004-02-19 Gerhard Weber Surface mapping and generating devices and methods for surface mapping and surface generation
WO2004019271A2 (en) * 2002-08-22 2004-03-04 Schick Technologies, Inc. Intra-oral camera coupled directly and independently to a computer
US20040041027A1 (en) * 2000-12-15 2004-03-04 Leonard Reiffel Imaged coded data source transducer product
US20040051680A1 (en) * 2002-09-25 2004-03-18 Azuma Ronald T. Optical see-through augmented reality modified-scale display
US20040059467A1 (en) * 2001-06-14 2004-03-25 Sharper Image Corporation Robot capable of detecting an edge
US20040087378A1 (en) * 2002-11-01 2004-05-06 Poe Lang Enterprise Co., Ltd. Shooting exercise for simultaneous multiple shooters
US20040122675A1 (en) * 2002-12-19 2004-06-24 Nefian Ara Victor Visual feature extraction procedure useful for audiovisual continuous speech recognition
US20040125224A1 (en) * 2000-08-18 2004-07-01 Leonard Reiffel Annotating imaged data product
US20040125076A1 (en) * 2001-06-08 2004-07-01 David Green Method and apparatus for human interface with a computer
US20040131259A1 (en) * 2003-01-06 2004-07-08 Nefian Ara V. Embedded bayesian network for pattern recognition
US20040135766A1 (en) * 2001-08-15 2004-07-15 Leonard Reiffel Imaged toggled data input product
US20040141634A1 (en) * 2002-10-25 2004-07-22 Keiichi Yamamoto Hand pattern switch device
US20040155962A1 (en) * 2003-02-11 2004-08-12 Marks Richard L. Method and apparatus for real time motion capture
US20040179001A1 (en) * 2003-03-11 2004-09-16 Morrison Gerald D. System and method for differentiating between pointers used to contact touch surface
US20040201575A1 (en) * 2003-04-08 2004-10-14 Morrison Gerald D. Auto-aligning touch system and method
US20040207600A1 (en) * 2000-10-24 2004-10-21 Microsoft Corporation System and method for transforming an ordinary computer monitor into a touch screen
US20040233223A1 (en) * 2003-05-22 2004-11-25 Steven Schkolne Physical/digital input methodologies for spatial manipulations and entertainment
US20040239670A1 (en) * 2003-05-29 2004-12-02 Sony Computer Entertainment Inc. System and method for providing a real-time three-dimensional interactive environment
US20040245469A1 (en) * 1999-09-16 2004-12-09 Wayne State University Hand-held sound source for sonic infrared imaging of defects in materials
US20050023763A1 (en) * 2003-07-30 2005-02-03 Richardson Todd E. Sports simulation system
US20050030296A1 (en) * 2002-03-29 2005-02-10 Xerox Corporation Tactile overlays for screens
US20050063564A1 (en) * 2003-08-11 2005-03-24 Keiichi Yamamoto Hand pattern switch device
US20050077452A1 (en) * 2000-07-05 2005-04-14 Gerald Morrison Camera-based touch system
US20050088424A1 (en) * 2000-07-05 2005-04-28 Gerald Morrison Passive touch system and method of detecting user input
US20050102332A1 (en) * 2000-12-15 2005-05-12 Leonard Reiffel Multi-imager multi-source multi-use coded data source data iInput product
US20050125826A1 (en) * 2003-05-08 2005-06-09 Hunleth Frank A. Control framework with a zoomable graphical user interface for organizing selecting and launching media items
US20050166163A1 (en) * 2004-01-23 2005-07-28 Chang Nelson L.A. Systems and methods of interfacing with a machine
WO2005073838A2 (en) 2004-01-16 2005-08-11 Sony Computer Entertainment Inc. Method and apparatus for light input device
US20050178953A1 (en) * 2004-02-17 2005-08-18 Stephen Worthington Apparatus for detecting a pointer within a region of interest
US20050192185A1 (en) * 2004-02-27 2005-09-01 Saathoff Lee D. Power transmission fluids
EP1575169A2 (en) * 2004-03-10 2005-09-14 ABB PATENT GmbH Proximity switch with signal processing system
US20050213790A1 (en) * 1999-05-19 2005-09-29 Rhoads Geoffrey B Methods for using wireless phones having optical capabilities
US20050214716A1 (en) * 2001-11-08 2005-09-29 Willytech Gmbh Devices and methods for producing denture parts
US20050227811A1 (en) * 1999-12-03 2005-10-13 Nike, Inc. Game pod
US20050231532A1 (en) * 2004-03-31 2005-10-20 Canon Kabushiki Kaisha Image processing method and image processing apparatus
US20050238202A1 (en) * 2004-02-26 2005-10-27 Mitsubishi Fuso Truck And Bus Corporation Hand pattern switching apparatus
EP1601783A2 (en) * 2003-03-13 2005-12-07 Sony Pictures Entertainment Inc. Wheel motion control input device for animation system
DE102004025516A1 (en) * 2004-05-21 2005-12-29 X3D Technologies Gmbh Object, e.g. user`s, fingertip, position determining arrangement ,for use in room, has opto-electronic camera with visible mirror and evaluation unit determining position of object based on signal generated by camera
US20060020369A1 (en) * 2004-03-11 2006-01-26 Taylor Charles E Robot vacuum cleaner
US20060022962A1 (en) * 2002-11-15 2006-02-02 Gerald Morrison Size/scale and orientation determination of a pointer in a camera-based touch system
US20060031786A1 (en) * 2004-08-06 2006-02-09 Hillis W D Method and apparatus continuing action of user gestures performed upon a touch sensitive interactive display in simulation of inertia
US7000840B2 (en) 2000-05-03 2006-02-21 Leonard Reiffel Dual mode data imaging product
US20060038833A1 (en) * 2004-08-19 2006-02-23 Mallinson Dominic S Portable augmented reality device and method
US20060063574A1 (en) * 2003-07-30 2006-03-23 Richardson Todd E Sports simulation system
US7034803B1 (en) 2000-08-18 2006-04-25 Leonard Reiffel Cursor display privacy product
US20060129540A1 (en) * 2004-12-15 2006-06-15 Hillis W D Data store with lock-free stateless paging capability
US20060125799A1 (en) * 2004-08-06 2006-06-15 Hillis W D Touch driven method and apparatus to integrate and display multiple image layers forming alternate depictions of same subject matter
US20060132433A1 (en) * 2000-04-17 2006-06-22 Virtual Technologies, Inc. Interface for controlling a graphical image
US20060182346A1 (en) * 2001-09-17 2006-08-17 National Inst. Of Adv. Industrial Science & Tech. Interface apparatus
US20060187141A1 (en) * 2005-02-18 2006-08-24 Weis Judd W Mobile display
EP1698964A1 (en) * 2003-11-25 2006-09-06 Kenji Nishi Information input unit, storing unit, information input device, and information processing device
US20060237633A1 (en) * 2005-04-21 2006-10-26 Fouquet Julie E Orientation determination utilizing a cordless device
US20060239471A1 (en) * 2003-08-27 2006-10-26 Sony Computer Entertainment Inc. Methods and apparatus for targeted sound detection and characterization
US20060252541A1 (en) * 2002-07-27 2006-11-09 Sony Computer Entertainment Inc. Method and system for applying gearing effects to visual tracking
US20060256081A1 (en) * 2002-07-27 2006-11-16 Sony Computer Entertainment America Inc. Scheme for detecting and tracking user manipulation of a game controller body
US7137711B1 (en) 2000-03-21 2006-11-21 Leonard Reiffel Multi-user retro reflector data input
US20060264260A1 (en) * 2002-07-27 2006-11-23 Sony Computer Entertainment Inc. Detectable and trackable hand-held controller
US20060274032A1 (en) * 2002-07-27 2006-12-07 Xiadong Mao Tracking device for use in obtaining information for controlling game program execution
KR100656315B1 (en) 2005-01-07 2006-12-13 한국과학기술원 Apparatus for console game
US20060282873A1 (en) * 2002-07-27 2006-12-14 Sony Computer Entertainment Inc. Hand-held controller having detectable elements for tracking purposes
US20060287087A1 (en) * 2002-07-27 2006-12-21 Sony Computer Entertainment America Inc. Method for mapping movements of a hand-held controller to game commands
US20060287085A1 (en) * 2002-07-27 2006-12-21 Xiadong Mao Inertially trackable hand-held controller
US20060284832A1 (en) * 2005-06-16 2006-12-21 H.P.B. Optoelectronics Co., Ltd. Method and apparatus for locating a laser spot
US20060287084A1 (en) * 2002-07-27 2006-12-21 Xiadong Mao System, method, and apparatus for three-dimensional input control
US20060287086A1 (en) * 2002-07-27 2006-12-21 Sony Computer Entertainment America Inc. Scheme for translating movements of a hand-held controller into inputs for a system
US20060288313A1 (en) * 2004-08-06 2006-12-21 Hillis W D Bounding box gesture recognition on a touch detecting interactive display
US20060291797A1 (en) * 2003-05-27 2006-12-28 Leonard Reiffel Multi-imager multi-source multi-use coded data source data input product
US20070003915A1 (en) * 2004-08-11 2007-01-04 Templeman James N Simulated locomotion method and apparatus
US20070011159A1 (en) * 2004-12-15 2007-01-11 Hillis W Daniel Distributed data store with an orderstamp to ensure progress
US7165029B2 (en) 2002-05-09 2007-01-16 Intel Corporation Coupled hidden Markov model for audiovisual speech recognition
US20070015558A1 (en) * 2002-07-27 2007-01-18 Sony Computer Entertainment America Inc. Method and apparatus for use in determining an activity level of a user in relation to a system
US20070021208A1 (en) * 2002-07-27 2007-01-25 Xiadong Mao Obtaining input for controlling execution of a game program
US7171043B2 (en) 2002-10-11 2007-01-30 Intel Corporation Image recognition using hidden markov models and coupled hidden markov models
US7184030B2 (en) 2002-06-27 2007-02-27 Smart Technologies Inc. Synchronization of cameras in camera-based touch system to enhance position determination of fast moving objects
US20070046643A1 (en) * 2004-08-06 2007-03-01 Hillis W Daniel State-Based Approach to Gesture Identification
US20070075966A1 (en) * 2002-07-18 2007-04-05 Sony Computer Entertainment Inc. Hand-held computer interactive device
US20070098250A1 (en) * 2003-05-01 2007-05-03 Delta Dansk Elektronik, Lys Og Akustik Man-machine interface based on 3-D positions of the human body
US20070117503A1 (en) * 2005-11-21 2007-05-24 Warminsky Michael F Airflow ceiling ventilation system for an armored tactical personnel and collective training facility
US20070113487A1 (en) * 2005-11-21 2007-05-24 Amec Earth & Environmental, Inc. Re-configurable armored tactical personnel and collective training facility
US20070132785A1 (en) * 2005-03-29 2007-06-14 Ebersole John F Jr Platform for immersive gaming
US7235280B2 (en) 2003-11-12 2007-06-26 Srs Technologies, Inc. Non-intrusive photogrammetric targets
US20070165007A1 (en) * 2006-01-13 2007-07-19 Gerald Morrison Interactive input system
US20070187506A1 (en) * 2001-04-19 2007-08-16 Leonard Reiffel Combined imaging coded data source data acquisition
US20070205994A1 (en) * 2006-03-02 2007-09-06 Taco Van Ieperen Touch system and method for interacting with the same
SG134976A1 (en) * 2002-11-26 2007-09-28 Sony Corp Data input to a computer system defining a three-dimensional model
US20070238539A1 (en) * 2006-03-30 2007-10-11 Wayne Dawe Sports simulation system
US20070242034A1 (en) * 2006-04-03 2007-10-18 Haven Richard E Position determination with reference
US20070250195A1 (en) * 1999-05-19 2007-10-25 Rhoads Geoffrey B Methods and Systems Employing Digital Content
US20070259716A1 (en) * 2004-06-18 2007-11-08 Igt Control of wager-based game using gesture recognition
US20070265075A1 (en) * 2006-05-10 2007-11-15 Sony Computer Entertainment America Inc. Attachable structure for use with hand-held controller having tracking ability
US20080009350A1 (en) * 2003-12-31 2008-01-10 Ganz System and method for toy adoption marketing
US20080009348A1 (en) * 2002-07-31 2008-01-10 Sony Computer Entertainment Inc. Combiner method for altering game gearing
US20080014917A1 (en) * 1999-06-29 2008-01-17 Rhoads Geoffrey B Wireless Mobile Phone Methods
US20080019569A1 (en) * 1999-05-19 2008-01-24 Rhoads Geoffrey B Gestural Use of Wireless Mobile Phone Devices to Signal to Remote Systems
EP1894086A1 (en) * 2005-06-16 2008-03-05 SSD Company Limited Input device, simulated experience method and entertainment system
US20080065291A1 (en) * 2002-11-04 2008-03-13 Automotive Technologies International, Inc. Gesture-Based Control of Vehicular Components
US20080080789A1 (en) * 2006-09-28 2008-04-03 Sony Computer Entertainment Inc. Object detection using video input combined with tilt angle information
US7355593B2 (en) 2004-01-02 2008-04-08 Smart Technologies, Inc. Pointer tracking across multiple overlapping coordinate input sub-regions defining a generally contiguous input region
US20080096657A1 (en) * 2006-10-20 2008-04-24 Sony Computer Entertainment America Inc. Method for aiming and shooting using motion sensing controller
US20080098448A1 (en) * 2006-10-19 2008-04-24 Sony Computer Entertainment America Inc. Controller configured to track user's level of anxiety and other mental and physical attributes
US20080096654A1 (en) * 2006-10-20 2008-04-24 Sony Computer Entertainment America Inc. Game control using three-dimensional motions of controller
US20080100825A1 (en) * 2006-09-28 2008-05-01 Sony Computer Entertainment America Inc. Mapping movements of a hand-held controller to the two-dimensional image plane of a display screen
US20080113799A1 (en) * 2005-05-10 2008-05-15 Pixart Imaging Inc. Orientation device method for coordinate generation employed thereby
US20080113800A1 (en) * 2005-04-20 2008-05-15 Robotic Amusements, Inc. Game With Remotely Controlled Game Vehicles
US20080122805A1 (en) * 2000-10-11 2008-05-29 Peter Smith Books, papers, and downloaded information to facilitate human interaction with computers
US20080133640A1 (en) * 2004-07-27 2008-06-05 Sony Corporation Information Processing Device and Method, Recording Medium, and Program
US20080136775A1 (en) * 2006-12-08 2008-06-12 Conant Carson V Virtual input device for computing
US20080146303A1 (en) * 2003-10-23 2008-06-19 Hiromu Ueshima Game for moving an object on a screen in response to movement of an operation article
US20080153069A1 (en) * 2006-12-22 2008-06-26 Stephan Holzner Method, machine-readable medium and computer concerning the manufacture of dental prostheses
US20080154743A1 (en) * 2006-12-22 2008-06-26 Stephan Holzner Method concerning the transport of dental prostheses
US20080163055A1 (en) * 2006-12-06 2008-07-03 S.H. Ganz Holdings Inc. And 816877 Ontario Limited System and method for product marketing using feature codes
US7411575B2 (en) 2003-09-16 2008-08-12 Smart Technologies Ulc Gesture recognition method and touch system incorporating the same
US20080215974A1 (en) * 2007-03-01 2008-09-04 Phil Harrison Interactive user controlled avatar animations
US20080220867A1 (en) * 2002-07-27 2008-09-11 Sony Computer Entertainment Inc. Methods and systems for applying gearing effects to actions based on input data
US20080259053A1 (en) * 2007-04-11 2008-10-23 John Newton Touch Screen System with Hover and Click Input Methods
US7460110B2 (en) 2004-04-29 2008-12-02 Smart Technologies Ulc Dual mode touch system
US20090021476A1 (en) * 2007-07-20 2009-01-22 Wolfgang Steinle Integrated medical display system
US20090021475A1 (en) * 2007-07-20 2009-01-22 Wolfgang Steinle Method for displaying and/or processing image data of medical origin using gesture recognition
GB2451461A (en) * 2007-07-28 2009-02-04 Naveen Chawla Camera based 3D user and wand tracking human-computer interaction system
US7492357B2 (en) 2004-05-05 2009-02-17 Smart Technologies Ulc Apparatus and method for detecting a pointer relative to a touch surface
US20090054155A1 (en) * 2003-07-02 2009-02-26 Ganz Interactive action figures for gaming systems
US20090058833A1 (en) * 2007-08-30 2009-03-05 John Newton Optical Touchscreen with Improved Illumination
US20090091583A1 (en) * 2007-10-06 2009-04-09 Mccoy Anthony Apparatus and method for on-field virtual reality simulation of US football and other sports
US20090096714A1 (en) * 2006-03-31 2009-04-16 Brother Kogyo Kabushiki Kaisha Image display device
US20090116692A1 (en) * 1998-08-10 2009-05-07 Paul George V Realtime object tracking system
US20090124165A1 (en) * 2000-10-20 2009-05-14 Creative Kingdoms, Llc Wireless toy systems and methods for interactive entertainment
US20090122146A1 (en) * 2002-07-27 2009-05-14 Sony Computer Entertainment Inc. Method and apparatus for tracking three-dimensional movements of an object using a depth sensing camera
US20090131164A1 (en) * 2003-12-31 2009-05-21 Ganz System and method for toy adoption and marketing
US20090172756A1 (en) * 2007-12-31 2009-07-02 Motorola, Inc. Lighting analysis and recommender system for video telephony
US20090207144A1 (en) * 2008-01-07 2009-08-20 Next Holdings Limited Position Sensing System With Edge Positioning Enhancement
US20090215534A1 (en) * 2007-11-14 2009-08-27 Microsoft Corporation Magic wand
US20090213093A1 (en) * 2008-01-07 2009-08-27 Next Holdings Limited Optical position sensor using retroreflection
US20090213094A1 (en) * 2008-01-07 2009-08-27 Next Holdings Limited Optical Position Sensing System and Optical Position Sensor Assembly
US20090233769A1 (en) * 2001-03-07 2009-09-17 Timothy Pryor Motivation and enhancement of physical and mental exercise, rehabilitation, health and social interaction
US20090259967A1 (en) * 2008-04-10 2009-10-15 Davidson Philip L Methods of interfacing with multi-input devices and multi-input display systems employing interfacing techniques
US20090258694A1 (en) * 2007-11-23 2009-10-15 Aristocrat Technologies Australia Pty Limited Gaming system and a method of gaming
US20090268163A1 (en) * 2008-04-28 2009-10-29 Upton Beall Bowden Reconfigurable center stack with touch sensing
US20090278794A1 (en) * 2008-05-09 2009-11-12 Smart Technologies Ulc Interactive Input System With Controlled Lighting
US20090278799A1 (en) * 2008-05-12 2009-11-12 Microsoft Corporation Computer vision-based multi-touch sensing using infrared lasers
US20090277694A1 (en) * 2008-05-09 2009-11-12 Smart Technologies Ulc Interactive Input System And Bezel Therefor
US20090280898A1 (en) * 2006-12-22 2009-11-12 Konami Digital Entertainment Co., Ltd. Game device, method of controlling game device, and information recording medium
US20090282749A1 (en) * 2005-11-21 2009-11-19 Warminsky Michael F Re-configurable armored tactical personnel and collective training facility
US20100001950A1 (en) * 2005-04-21 2010-01-07 Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. Position determination utilizing a cordless device
US20100009308A1 (en) * 2006-05-05 2010-01-14 Align Technology, Inc. Visualizing and Manipulating Digital Models for Dental Treatment
US20100013768A1 (en) * 2008-07-18 2010-01-21 Apple Inc. Methods and apparatus for processing combinations of kinematical inputs
US20100026470A1 (en) * 2008-08-04 2010-02-04 Microsoft Corporation Fusing rfid and vision for surface object tracking
US20100031203A1 (en) * 2008-08-04 2010-02-04 Microsoft Corporation User-defined gesture set for surface computing
US20100046796A1 (en) * 2005-06-30 2010-02-25 Koninklijke Philips Electronics, N.V. method of recognizing a motion pattern of an object
US20100062819A1 (en) * 1999-08-30 2010-03-11 Hannigan Brett T Methods and Related Toy and Game Applications Using Encoded Information
US20100070473A1 (en) * 2004-12-15 2010-03-18 Swett Ian Distributed Data Store with a Designated Master to Ensure Consistency
US7687744B2 (en) 2002-05-13 2010-03-30 S.C. Johnson & Son, Inc. Coordinated emission of fragrance, light, and sound
US20100079385A1 (en) * 2008-09-29 2010-04-01 Smart Technologies Ulc Method for calibrating an interactive input system and interactive input system executing the calibration method
US20100090985A1 (en) * 2003-02-14 2010-04-15 Next Holdings Limited Touch screen signal processing
US20100103104A1 (en) * 2008-10-29 2010-04-29 Electronics And Telecommunications Research Institute Apparatus for user interface based on wearable computing environment and method thereof
US20100107184A1 (en) * 2008-10-23 2010-04-29 Peter Rae Shintani TV with eye detection
US20100110005A1 (en) * 2008-11-05 2010-05-06 Smart Technologies Ulc Interactive input system with multi-angle reflector
US20100127986A1 (en) * 2008-11-21 2010-05-27 Chih-Ming Liao Calibration method of projection effect
US20100137064A1 (en) * 2002-10-30 2010-06-03 Nike, Inc. Sigils for Use with Apparel
EP2199948A1 (en) * 2008-12-18 2010-06-23 Koninklijke Philips Electronics N.V. Method of plotting a 3D movement in a 1D graph and of comparing two arbitrary 3D movements
US20100192109A1 (en) * 2007-01-06 2010-07-29 Wayne Carl Westerman Detecting and Interpreting Real-World and Security Gestures on Touch and Hover Sensitive Devices
US20100199221A1 (en) * 2009-01-30 2010-08-05 Microsoft Corporation Navigation of a virtual plane using depth
US20100235258A1 (en) * 2009-03-13 2010-09-16 Nike, Inc. Method Of Customized Cleat Arrangement
US7805220B2 (en) 2003-03-14 2010-09-28 Sharper Image Acquisition Llc Robot vacuum with internal mapping system
US20100293473A1 (en) * 2009-05-15 2010-11-18 Ganz Unlocking emoticons using feature codes
US20100295782A1 (en) * 2009-05-21 2010-11-25 Yehuda Binder System and method for control based on face ore hand gesture detection
US20100295823A1 (en) * 2009-05-25 2010-11-25 Korea Electronics Technology Institute Apparatus for touching reflection image using an infrared screen
US20100306715A1 (en) * 2009-05-29 2010-12-02 Microsoft Corporation Gestures Beyond Skeletal
US20100305757A1 (en) * 2009-05-29 2010-12-02 Kuka Roboter Gmbh Method And Device For Controlling An Auxiliary Tool Axis Of A Tool Being Guided By A Manipulator
US7860614B1 (en) * 2005-09-13 2010-12-28 The United States Of America As Represented By The Secretary Of The Army Trainer for robotic vehicle
US20110081970A1 (en) * 2000-02-22 2011-04-07 Creative Kingdoms, Llc Systems and methods for providing interactive game play
US20110080490A1 (en) * 2009-10-07 2011-04-07 Gesturetek, Inc. Proximity object tracker
US7921591B1 (en) * 2009-04-30 2011-04-12 Terry Adcock Flip-up aiming sight
US7932482B2 (en) 2003-02-07 2011-04-26 S.C. Johnson & Son, Inc. Diffuser with light emitting diode nightlight
US20110095977A1 (en) * 2009-10-23 2011-04-28 Smart Technologies Ulc Interactive input system incorporating multi-angle reflecting structure
US20110148822A1 (en) * 2009-12-22 2011-06-23 Korea Electronics Technology Institute Three-Dimensional Space Touch Apparatus Using Multiple Infrared Cameras
US20110148821A1 (en) * 2009-12-22 2011-06-23 Korea Electronics Technology Institute Infrared Screen-Type Space Touch Apparatus
US20110157012A1 (en) * 2009-12-31 2011-06-30 Microsoft Corporation Recognizing interactive media input
US20110197263A1 (en) * 2010-02-11 2011-08-11 Verizon Patent And Licensing, Inc. Systems and methods for providing a spatial-input-based multi-user shared display experience
US20110199387A1 (en) * 2009-11-24 2011-08-18 John David Newton Activating Features on an Imaging Device Based on Manipulations
US20110205155A1 (en) * 2009-12-04 2011-08-25 John David Newton Methods and Systems for Position Detection Using an Interactive Volume
US20110205189A1 (en) * 2008-10-02 2011-08-25 John David Newton Stereo Optical Sensors for Resolving Multi-Touch in a Touch Detection System
US20110221919A1 (en) * 2010-03-11 2011-09-15 Wenbo Zhang Apparatus, method, and system for identifying laser spot
US20110234542A1 (en) * 2010-03-26 2011-09-29 Paul Marson Methods and Systems Utilizing Multiple Wavelengths for Position Detection
USRE42794E1 (en) 1999-12-27 2011-10-04 Smart Technologies Ulc Information-inputting device inputting contact point of object on recording surfaces as information
CN102243528A (en) * 2010-06-22 2011-11-16 Microsoft Corporation Providing directional force feedback in free space
CN102279670A (en) * 2010-06-09 2011-12-14 The Boeing Company Gesture-based human machine interface
US8094137B2 (en) 2007-07-23 2012-01-10 Smart Technologies Ulc System and method of detecting contact on a display
USRE43084E1 (en) 1999-10-29 2012-01-10 Smart Technologies Ulc Method and apparatus for inputting information including coordinate data
EP2351604A3 (en) * 2006-05-04 2012-01-25 Sony Computer Entertainment America LLC Obtaining input for controlling execution of a game program
US20120021386A1 (en) * 2007-08-01 2012-01-26 Airmax Group Plc Method and apparatus for providing information about a vehicle
US8108484B2 (en) 1999-05-19 2012-01-31 Digimarc Corporation Fingerprints and machine-readable codes combined with user characteristics to obtain content or information
GB2482729A (en) * 2010-08-13 2012-02-15 Monnowtone Ltd An augmented reality musical instrument simulation system
US8120596B2 (en) 2004-05-21 2012-02-21 Smart Technologies Ulc Tiled touch system
US8131781B2 (en) * 2004-12-15 2012-03-06 Applied Minds, Llc Anti-item for deletion of content in a distributed datastore
US8149221B2 (en) 2004-05-07 2012-04-03 Next Holdings Limited Touch panel display system with illumination and detection provided from a single edge
US20120110447A1 (en) * 2010-11-01 2012-05-03 Sony Computer Entertainment Inc. Control of virtual object using device touch interface functionality
US20120139912A1 (en) * 2007-03-06 2012-06-07 Wildtangent, Inc. Rendering of two-dimensional markup messages
US8206219B2 (en) 2002-10-30 2012-06-26 Nike, Inc. Interactive gaming apparel for interactive gaming
WO2012086984A2 (en) 2010-12-21 2012-06-28 Samsung Electronics Co., Ltd. Method, device, and system for providing sensory information and sense
CN102566825A (en) * 2010-12-08 2012-07-11 Wistron Corporation Method for positioning compensation and optical touch module thereof
US20120192114A1 (en) * 2011-01-20 2012-07-26 Research In Motion Corporation Three-dimensional, multi-depth presentation of icons associated with a user interface
US20120194553A1 (en) * 2010-02-28 2012-08-02 Osterhout Group, Inc. Ar glasses with sensor and user action based control of external devices with feedback
WO2012104772A1 (en) 2011-02-04 2012-08-09 Koninklijke Philips Electronics N.V. Gesture controllable system uses proprioception to create absolute frame of reference
EP2460569A3 (en) * 2006-05-04 2012-08-29 Sony Computer Entertainment America LLC Scheme for Detecting and Tracking User Manipulation of a Game Controller Body and for Translating Movements Thereof into Inputs and Game Commands
US20120238366A1 (en) * 2011-03-15 2012-09-20 Maurice Tedder Robot Game for Multiple Players that is Remotely Controlled over a Network
CN102736733A (en) * 2011-04-15 2012-10-17 英吉尼克斯公司 Electronic systems with touch free input devices and associated methods
US20120314022A1 (en) * 2011-06-13 2012-12-13 Samsung Electronics Co., Ltd. Display apparatus and method for controlling display apparatus and remote controller
US20120323364A1 (en) * 2010-01-14 2012-12-20 Rainer Birkenbach Controlling a surgical navigation system
US8384693B2 (en) 2007-08-30 2013-02-26 Next Holdings Limited Low profile touch panel systems
USRE44054E1 (en) 2000-12-08 2013-03-05 Ganz Graphic chatting with organizational avatars
US8391851B2 (en) 1999-11-03 2013-03-05 Digimarc Corporation Gestural techniques with wireless mobile phone devices
EP2590047A1 (en) * 2011-11-04 2013-05-08 Tobii Technology AB Portable device
US20130113993A1 (en) * 2011-11-04 2013-05-09 Remote TelePointer, LLC Method and system for user interface for interactive devices using a mobile device
US8456447B2 (en) 2003-02-14 2013-06-04 Next Holdings Limited Touch screen signal processing
US8456418B2 (en) 2003-10-09 2013-06-04 Smart Technologies Ulc Apparatus for determining the location of a pointer within a region of interest
US8467978B2 (en) 2010-08-31 2013-06-18 The Boeing Company Identifying features on a surface of an object using wavelet analysis
US8508508B2 (en) 2003-02-14 2013-08-13 Next Holdings Limited Touch screen signal processing with single-point calibration
US20130231771A1 (en) * 2012-03-02 2013-09-05 Massachusetts Institute Of Technology Methods and Apparatus for Handheld Tool
US8538562B2 (en) 2000-03-07 2013-09-17 Motion Games, Llc Camera based interactive exercise
US20130253733A1 (en) * 2012-03-26 2013-09-26 Hon Hai Precision Industry Co., Ltd. Computing device and method for controlling unmanned aerial vehicle in flight space
US20130252214A1 (en) * 2004-09-27 2013-09-26 123 Certification, Inc. Body motion training and qualification system and method
EP2645303A2 (en) * 2006-07-13 2013-10-02 Northrop Grumman Systems Corporation Gesture recognition inrterface system
US8593402B2 (en) 2010-04-30 2013-11-26 Verizon Patent And Licensing Inc. Spatial-input-based cursor projection systems and methods
US8654198B2 (en) 1999-05-11 2014-02-18 Timothy R. Pryor Camera based interaction and instruction
US20140061478A1 (en) * 2011-11-28 2014-03-06 Eads Deutschland Gmbh Method and Device for Tracking a Moving Target Object
US8668584B2 (en) 2004-08-19 2014-03-11 Igt Virtual input system
US8686579B2 (en) 2000-02-22 2014-04-01 Creative Kingdoms, Llc Dual-range wireless controller
US20140092016A1 (en) * 2012-09-28 2014-04-03 Pixart Imaging Inc. Handheld Pointing Device and Operation Method Thereof
US8692768B2 (en) 2009-07-10 2014-04-08 Smart Technologies Ulc Interactive input system
CN103729092A (en) * 2012-10-12 2014-04-16 PixArt Imaging Inc. Handheld pointing device and control method thereof
US8702515B2 (en) 2002-04-05 2014-04-22 Mq Gaming, Llc Multi-platform gaming system using RFID-tagged toys
US8711094B2 (en) 2001-02-22 2014-04-29 Creative Kingdoms, Llc Portable gaming device and gaming system combining both physical and virtual play elements
US8744645B1 (en) * 2013-02-26 2014-06-03 Honda Motor Co., Ltd. System and method for incorporating gesture and voice recognition into a single system
US8758136B2 (en) 1999-02-26 2014-06-24 Mq Gaming, Llc Multi-platform gaming systems and methods
US20140254870A1 (en) * 2013-03-11 2014-09-11 Lenovo (Singapore) Pte. Ltd. Method for recognizing motion gesture commands
US20140267033A1 (en) * 2013-03-14 2014-09-18 Omnivision Technologies, Inc. Information Technology Device Input Systems And Associated Methods
US20140282008A1 (en) * 2011-10-20 2014-09-18 Koninklijke Philips N.V. Holographic user interfaces for medical procedures
US20140280748A1 (en) * 2013-03-14 2014-09-18 Microsoft Corporation Cooperative federation of digital devices via proxemics and device micro-mobility
US8842166B2 (en) 2010-06-14 2014-09-23 Nintendo Co., Ltd. Storage medium having stored therein stereoscopic image display program, stereoscopic image display device, stereoscopic image display system, and stereoscopic image display method
US20140317576A1 (en) * 2011-12-06 2014-10-23 Thomson Licensing Method and system for responding to user's selection gesture of object displayed in three dimensions
US20140325459A1 (en) * 2004-02-06 2014-10-30 Nokia Corporation Gesture control system
CN104345881A (en) * 2013-08-09 2015-02-11 Lenovo (Beijing) Co., Ltd. Method for processing information and electronic equipment
US8957856B2 (en) 2010-10-21 2015-02-17 Verizon Patent And Licensing Inc. Systems, methods, and apparatuses for spatial input associated with a display
US8976265B2 (en) 2002-07-27 2015-03-10 Sony Computer Entertainment Inc. Apparatus for image and sound capture in a game environment
CN104407693A (en) * 2014-10-23 2015-03-11 Zhang Guopei Somatosensory operation method, somatosensory operation system and data processing device of small smart device
US20150078617A1 (en) * 2013-09-13 2015-03-19 Research & Business Foundation Sungkyunkwan University Mobile terminal and method for generating control command using marker attached to finger
CN104516660A (en) * 2013-09-27 2015-04-15 Lenovo (Beijing) Co., Ltd. Information processing method and system and electronic device
US20150120279A1 (en) * 2013-10-28 2015-04-30 Linkedin Corporation Techniques for translating text via wearable computing device
CN104766315A (en) * 2015-03-30 2015-07-08 Lenovo (Beijing) Co., Ltd. Method for calibrating relative position relation between image collection device and display screen and equipment
US9110503B2 (en) 2012-11-30 2015-08-18 WorldViz LLC Precision position tracking device
US20150231490A1 (en) * 2005-11-14 2015-08-20 Microsoft Technology Licensing, Llc Stereo video for gaming
US20150294480A1 (en) * 2005-10-26 2015-10-15 Sony Computer Entertainment Inc. Control Device for Communicating Visual Information
US9167289B2 (en) 2010-09-02 2015-10-20 Verizon Patent And Licensing Inc. Perspective display systems and methods
US9174119B2 (en) 2002-07-27 2015-11-03 Sony Computer Entertainment America, LLC Controller for providing inputs to control execution of a program when inputs are combined
US9199153B2 (en) 2003-07-30 2015-12-01 Interactive Sports Technologies Inc. Golf simulation system with reflective projectile marking
US20160027199A1 (en) * 2013-03-01 2016-01-28 Xiang Cao Object creation using body gestures
US9272206B2 (en) 2002-04-05 2016-03-01 Mq Gaming, Llc System and method for playing an interactive game
EP2857958A4 (en) * 2012-05-30 2016-03-23 Nec Corp Information processing system, information processing method, communication terminal, information processing device and control method and control program therefor
US9304593B2 (en) 1998-08-10 2016-04-05 Cybernet Systems Corporation Behavior recognition system
US20160104037A1 (en) * 2013-05-07 2016-04-14 Zienon Llc Method and device for generating motion signature on the basis of motion signature information
US9442607B2 (en) 2006-12-04 2016-09-13 Smart Technologies Inc. Interactive input system and method
US9446319B2 (en) 2003-03-25 2016-09-20 Mq Gaming, Llc Interactive gaming toy
CN106156398A (en) * 2015-05-12 2016-11-23 Siemens Healthcare GmbH Operating device and method for computer-aided simulation
EP3133592A1 (en) * 2015-08-19 2017-02-22 Samsung Electronics Co., Ltd. Display apparatus and controlling method thereof for the selection of clothes
US9612656B2 (en) 2012-11-27 2017-04-04 Facebook, Inc. Systems and methods of eye tracking control on mobile device
US20170098014A1 (en) * 2015-10-06 2017-04-06 Radix Inc. System and Method for Generating Digital Information and Altering Digital Models of Components With Same
CN106601217A (en) * 2016-12-06 2017-04-26 Beijing University of Posts and Telecommunications Interactive-type musical instrument performing method and device
US20170123510A1 (en) * 2010-02-23 2017-05-04 Muv Interactive Ltd. System for projecting content to a display surface having user-controlled size, shape and location/direction and apparatus and methods useful in conjunction therewith
US9671868B2 (en) 2013-06-11 2017-06-06 Honeywell International Inc. System and method for volumetric computing
US20170205939A1 (en) * 2015-09-10 2017-07-20 Boe Technology Group Co., Ltd. Method and apparatus for touch responding of wearable device as well as wearable device
US9759917B2 (en) 2010-02-28 2017-09-12 Microsoft Technology Licensing, Llc AR glasses with event and sensor triggered AR eyepiece interface to external devices
CN107167077A (en) * 2017-07-07 2017-09-15 BOE Technology Group Co., Ltd. Stereo vision measurement system and stereo vision measurement method
US9785243B2 (en) 2014-01-30 2017-10-10 Honeywell International Inc. System and method for providing an ergonomic three-dimensional, gesture based, multimodal interface for use in flight deck applications
US20170309057A1 (en) * 2010-06-01 2017-10-26 Vladimir Vaganov 3d digital painting
US9804257B2 (en) 2014-11-13 2017-10-31 WorldViz LLC Methods and systems for an immersive virtual reality system using multiple active markers
US20170319951A1 (en) * 2016-05-03 2017-11-09 Performance Designed Products Llc Video gaming system and method of operation
DE102016208095A1 (en) 2016-05-11 2017-11-16 Bayerische Motoren Werke Aktiengesellschaft Means of transport, arrangement and method for the entertainment of a user of a means of transportation
US9875406B2 (en) 2010-02-28 2018-01-23 Microsoft Technology Licensing, Llc Adjustable extension for temple arm
US20180050266A1 (en) * 2016-08-16 2018-02-22 Square Enix Co., Ltd. Program, article selection system, terminal device and article selection method
DE102016013028A1 (en) * 2016-11-02 2018-05-03 Friedrich-Schiller-Universität Jena Method and device for precise position determination of arrow-like objects relative to surfaces
US9990689B2 (en) 2015-12-16 2018-06-05 WorldViz, Inc. Multi-user virtual reality processing
US10022621B2 (en) * 2006-05-08 2018-07-17 Nintendo Co., Ltd. Methods and apparatus for using illumination marks for spatial pointing
US10037086B2 (en) 2011-11-04 2018-07-31 Tobii Ab Portable device
US20180273201A1 (en) * 2014-10-17 2018-09-27 Sony Corporation Control device, control method, and flight vehicle device
US10095928B2 (en) 2015-12-22 2018-10-09 WorldViz, Inc. Methods and systems for marker identification
US10180572B2 (en) 2010-02-28 2019-01-15 Microsoft Technology Licensing, Llc AR glasses with event and user action control of external applications
US10242501B1 (en) 2016-05-03 2019-03-26 WorldViz, Inc. Multi-user virtual and augmented reality tracking systems
US10282742B2 (en) 1999-12-03 2019-05-07 Nike, Inc. Interactive use and athletic performance monitoring and reward method, system, and computer program product
US20190164392A1 (en) * 2016-07-28 2019-05-30 Sandmanden Storsække Service V/ Flemming Petersen Method for warning a user, a wearable warning system and use of the system
US10336188B2 (en) * 2017-03-15 2019-07-02 Subaru Corporation Vehicle display system and method of controlling vehicle display system
US20190262699A1 (en) * 2012-06-04 2019-08-29 Sony Interactive Entertainment Inc. Split-screen presentation based on user location
US10403050B1 (en) * 2017-04-10 2019-09-03 WorldViz, Inc. Multi-user virtual and augmented reality tracking systems
US10416708B2 (en) 2015-02-10 2019-09-17 Nintendo Co., Ltd. Accessory and information processing system
CN110362205A (en) * 2012-12-03 2019-10-22 Qualcomm Incorporated Device and method for an infrared contactless gesture system
US10459527B2 (en) 2011-12-02 2019-10-29 Intel Corporation Techniques for notebook hinge sensors
US10491748B1 (en) 2006-04-03 2019-11-26 Wai Wu Intelligent communication routing system and method
US10495726B2 (en) 2014-11-13 2019-12-03 WorldViz, Inc. Methods and systems for an immersive virtual reality system using multiple active markers
US10539787B2 (en) 2010-02-28 2020-01-21 Microsoft Technology Licensing, Llc Head-worn adaptive display
US10551930B2 (en) * 2003-03-25 2020-02-04 Microsoft Technology Licensing, Llc System and method for executing a process using accelerometer signals
US10607340B2 (en) * 2016-02-17 2020-03-31 Samsung Electronics Co., Ltd. Remote image transmission system, display apparatus, and guide displaying method thereof
US10653964B2 (en) * 1999-05-12 2020-05-19 Wilbert Quinc Murdock Transmitting sensor data created in a game environment to a set of processors outside the game environment based on predefined event determinations
US10656099B2 (en) * 2017-11-28 2020-05-19 Wuhan China Star Optoelectronics Semiconductor Display Technology Co., Ltd. Monitoring method and monitoring apparatus of thimble bases
US10779337B2 (en) 2013-05-07 2020-09-15 Hangzhou Zhileng Technology Co. Ltd. Method, apparatus and system for establishing connection between devices
US10782779B1 (en) * 2018-09-27 2020-09-22 Apple Inc. Feedback coordination for a virtual interaction
US10860100B2 (en) 2010-02-28 2020-12-08 Microsoft Technology Licensing, Llc AR glasses with predictive control of external device based on event input
USRE48417E1 (en) 2006-09-28 2021-02-02 Sony Interactive Entertainment Inc. Object direction using video input combined with tilt angle information
US10922870B2 (en) * 2010-06-01 2021-02-16 Vladimir Vaganov 3D digital painting
USD927996S1 (en) 2019-05-21 2021-08-17 Whirlpool Corporation Cooking assistance appliance
US11117033B2 (en) 2010-04-26 2021-09-14 Wilbert Quinc Murdock Smart system for display of dynamic movement parameters in sports and training
US11123863B2 (en) * 2018-01-23 2021-09-21 Seiko Epson Corporation Teaching device, robot control device, and robot system
US11195233B1 (en) * 2014-06-12 2021-12-07 Allstate Insurance Company Virtual simulation for insurance
US11216887B1 (en) * 2014-06-12 2022-01-04 Allstate Insurance Company Virtual simulation for insurance
US11230005B2 (en) * 2019-01-24 2022-01-25 Fanuc Corporation Following robot and work robot system
US11321408B2 (en) 2004-12-15 2022-05-03 Applied Invention, Llc Data store with lock-free stateless paging capacity
US11358059B2 (en) 2020-05-27 2022-06-14 Ganz Live toy system
US11393037B1 (en) * 2017-06-29 2022-07-19 State Farm Mutual Automobile Insurance Company Movement-based device control
US11389735B2 (en) 2019-10-23 2022-07-19 Ganz Virtual pet system
US11470303B1 (en) 2010-06-24 2022-10-11 Steven M. Hoffberg Two dimensional to three dimensional moving image converter
US11517146B2 (en) 2019-05-21 2022-12-06 Whirlpool Corporation Cooking assistance appliance
US11599257B2 (en) * 2019-11-12 2023-03-07 Cast Group Of Companies Inc. Electronic tracking device and charging apparatus
CN117012140A (en) * 2023-08-01 2023-11-07 苏州旭智设计营造有限公司 Exhibition hall LED screen scene experience type starting control device
US11879959B2 (en) 2019-05-13 2024-01-23 Cast Group Of Companies Inc. Electronic tracking device and related system

Families Citing this family (259)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020036617A1 (en) * 1998-08-21 2002-03-28 Timothy R. Pryor Novel man machine interfaces and applications
GB9722766D0 (en) 1997-10-28 1997-12-24 British Telecomm Portable computers
US6990639B2 (en) * 2002-02-07 2006-01-24 Microsoft Corporation System and process for controlling electronic components in a ubiquitous computing environment using multimodal integration
US7382895B2 (en) * 2002-04-08 2008-06-03 Newton Security, Inc. Tailgating and reverse entry detection, alarm, recording and prevention using machine vision
US7646372B2 (en) * 2003-09-15 2010-01-12 Sony Computer Entertainment Inc. Methods and systems for enabling direction detection when interfacing with a computer program
US7627139B2 (en) * 2002-07-27 2009-12-01 Sony Computer Entertainment Inc. Computer image and audio processing of intensity and input devices for interfacing with a computer program
US7850526B2 (en) * 2002-07-27 2010-12-14 Sony Computer Entertainment America Inc. System for tracking user manipulations within an environment
US7262764B2 (en) * 2002-10-31 2007-08-28 Microsoft Corporation Universal computing device for surface applications
US7665041B2 (en) 2003-03-25 2010-02-16 Microsoft Corporation Architecture for controlling a computer using hand gestures
US7894177B2 (en) 2005-12-29 2011-02-22 Apple Inc. Light activated hold switch
WO2004107266A1 (en) * 2003-05-29 2004-12-09 Honda Motor Co., Ltd. Visual tracking using depth data
WO2004111687A2 (en) * 2003-06-12 2004-12-23 Honda Motor Co., Ltd. Target orientation estimation using depth sensing
US10279254B2 (en) * 2005-10-26 2019-05-07 Sony Interactive Entertainment Inc. Controller having visually trackable object for interfacing with a gaming system
US8323106B2 (en) * 2008-05-30 2012-12-04 Sony Computer Entertainment America Llc Determination of controller three-dimensional location using image analysis and ultrasonic communication
US9573056B2 (en) * 2005-10-26 2017-02-21 Sony Interactive Entertainment Inc. Expandable control device via hardware attachment
US7874917B2 (en) * 2003-09-15 2011-01-25 Sony Computer Entertainment Inc. Methods and systems for enabling depth and direction detection when interfacing with a computer program
US7663689B2 (en) * 2004-01-16 2010-02-16 Sony Computer Entertainment Inc. Method and apparatus for optimizing capture device settings through depth information
EP1731117B1 (en) * 2004-02-26 2010-09-29 State Scientific Center of Russian Fed.-Inst. of Bio-Med. Probl. of the Rus. Acad. of Sciences Suit for forcedly modifying a human posture and producing an increased load on a locomotion apparatus
US7983835B2 (en) 2004-11-03 2011-07-19 Lagassey Paul J Modular intelligent transportation system
US7308112B2 (en) * 2004-05-14 2007-12-11 Honda Motor Co., Ltd. Sign based human-machine interaction
US7782296B2 (en) * 2005-11-08 2010-08-24 Microsoft Corporation Optical tracker for tracking surface-independent movements
US7534988B2 (en) * 2005-11-08 2009-05-19 Microsoft Corporation Method and system for optical tracking of a pointing object
JPWO2007069618A1 (en) * 2005-12-12 2009-05-21 Shinsedai Co., Ltd. Training method, training apparatus, and coordination training method
US8730156B2 (en) * 2010-03-05 2014-05-20 Sony Computer Entertainment America Llc Maintaining multiple views on a shared stable virtual space
US9250703B2 (en) 2006-03-06 2016-02-02 Sony Computer Entertainment Inc. Interface with gaze detection and voice input
JP5089060B2 (en) * 2006-03-14 2012-12-05 Sony Computer Entertainment Inc. Entertainment system and game controller
US20070293124A1 (en) * 2006-06-14 2007-12-20 Motorola, Inc. Method and system for controlling a remote controlled vehicle using two-way communication
US8547428B1 (en) * 2006-11-02 2013-10-01 SeeScan, Inc. Pipe mapping system
US7844915B2 (en) 2007-01-07 2010-11-30 Apple Inc. Application programming interfaces for scrolling operations
US20090017910A1 (en) * 2007-06-22 2009-01-15 Broadcom Corporation Position and motion tracking of an object
US7942485B2 (en) * 2007-05-15 2011-05-17 Kathi Castelluccio Deployable workstation
CA2591808A1 (en) * 2007-07-11 2009-01-11 Hsien-Hsiang Chiu Intelligent object tracking and gestures sensing input device
US20090030767A1 (en) * 2007-07-24 2009-01-29 Microsoft Corporation Scheduling and improving ergonomic breaks using environmental information
DE102007042963A1 (en) * 2007-09-10 2009-03-12 Steinbichler Optotechnik Gmbh Method and device for the three-dimensional digitization of objects
WO2009038797A2 (en) * 2007-09-20 2009-03-26 Evolution Robotics Robotic game systems and methods
TWI372645B (en) * 2007-10-17 2012-09-21 Cywee Group Ltd An electronic game controller with motion-sensing capability
US8005263B2 (en) * 2007-10-26 2011-08-23 Honda Motor Co., Ltd. Hand sign recognition using label assignment
US8669938B2 (en) * 2007-11-20 2014-03-11 Naturalpoint, Inc. Approach for offset motion-based control of a computer
US8277222B2 (en) * 2007-11-28 2012-10-02 Kimberly Ann Shepherd Method and device for diagnosing and applying treatment for the emotional, physical, and cognitive development of a child for a multicultural society
US8542907B2 (en) * 2007-12-17 2013-09-24 Sony Computer Entertainment America Llc Dynamic three-dimensional object mapping for user-defined control device
US8149210B2 (en) * 2007-12-31 2012-04-03 Microsoft International Holdings B.V. Pointing device and method
US8493324B2 (en) * 2008-01-10 2013-07-23 Symax Technology Co., Ltd. Apparatus and method generating interactive signal for a moving article
US8502821B2 (en) * 2008-02-04 2013-08-06 C Speed, Llc System for three-dimensional rendering of electrical test and measurement signals
KR101416235B1 (en) * 2008-02-12 2014-07-07 Samsung Electronics Co., Ltd. Method and apparatus for 3D location input
US8840470B2 (en) * 2008-02-27 2014-09-23 Sony Computer Entertainment America Llc Methods for capturing depth data of a scene and applying computer actions
RU2010141546A (en) * 2008-03-10 2012-04-20 Koninklijke Philips Electronics N.V. (Nl) Video processing
US8368753B2 (en) 2008-03-17 2013-02-05 Sony Computer Entertainment America Llc Controller with an integrated depth camera
US20090237356A1 (en) * 2008-03-24 2009-09-24 Microsoft Corporation Optical pointing device
JP2009245349A (en) * 2008-03-31 2009-10-22 Namco Bandai Games Inc Position detection system, program, information recording medium, and image generating device
US20090254855A1 (en) * 2008-04-08 2009-10-08 Sony Ericsson Mobile Communications, Ab Communication terminals with superimposed user interface
US8971565B2 (en) * 2008-05-29 2015-03-03 Hie-D Technologies, Llc Human interface electronic device
US9649551B2 (en) 2008-06-03 2017-05-16 Tweedletech, Llc Furniture and building structures comprising sensors for determining the position of one or more objects
US10265609B2 (en) 2008-06-03 2019-04-23 Tweedletech, Llc Intelligent game system for putting intelligence into board and tabletop games including miniatures
US8602857B2 (en) 2008-06-03 2013-12-10 Tweedletech, Llc Intelligent board game system with visual marker based game object tracking and identification
US10155156B2 (en) 2008-06-03 2018-12-18 Tweedletech, Llc Multi-dimensional game comprising interactive physical and virtual components
US8974295B2 (en) 2008-06-03 2015-03-10 Tweedletech, Llc Intelligent game system including intelligent foldable three-dimensional terrain
KR20110039318A (en) * 2008-07-01 2011-04-15 Hillcrest Laboratories, Inc. 3D pointer mapping
US8427424B2 (en) 2008-09-30 2013-04-23 Microsoft Corporation Using physical objects in conjunction with an interactive surface
JP4793422B2 (en) * 2008-10-10 2011-10-12 Sony Corporation Information processing apparatus, information processing method, information processing system, and information processing program
US8963804B2 (en) * 2008-10-30 2015-02-24 Honeywell International Inc. Method and system for operating a near-to-eye display
US9383814B1 (en) * 2008-11-12 2016-07-05 David G. Capper Plug and play wireless video game
US8961313B2 (en) * 2009-05-29 2015-02-24 Sony Computer Entertainment America Llc Multi-positional three-dimensional controller
US8761434B2 (en) * 2008-12-17 2014-06-24 Sony Computer Entertainment Inc. Tracking system calibration by reconciling inertial data with computed acceleration of a tracked object in the three-dimensional coordinate system
US20100177039A1 (en) * 2009-01-10 2010-07-15 Isaac Grant Finger Indicia Input Device for Computer
US8527657B2 (en) * 2009-03-20 2013-09-03 Sony Computer Entertainment America Llc Methods and systems for dynamically adjusting update rates in multi-player network gaming
US8342963B2 (en) * 2009-04-10 2013-01-01 Sony Computer Entertainment America Inc. Methods and systems for enabling control of artificial intelligence game characters
TWI384427B (en) * 2009-04-29 2013-02-01 Utechzone Co Ltd Background establishment method and device
US8942428B2 (en) * 2009-05-01 2015-01-27 Microsoft Corporation Isolate extraneous motions
US8393964B2 (en) 2009-05-08 2013-03-12 Sony Computer Entertainment America Llc Base station for position location
US8142288B2 (en) 2009-05-08 2012-03-27 Sony Computer Entertainment America Llc Base station movement detection and compensation
US8744121B2 (en) 2009-05-29 2014-06-03 Microsoft Corporation Device for identifying and tracking multiple humans over time
TW201104494A (en) 2009-07-20 2011-02-01 J Touch Corp Stereoscopic image interactive system
US8264486B2 (en) * 2009-07-24 2012-09-11 The United States Of America As Represented By The Secretary Of The Navy Real-time high-speed three dimensional modeling system
US8400398B2 (en) * 2009-08-27 2013-03-19 Schlumberger Technology Corporation Visualization controls
KR101210280B1 (en) * 2009-09-02 2012-12-10 Electronics and Telecommunications Research Institute Sensor-based teaching aid assembly
JP5269745B2 (en) * 2009-10-30 2013-08-21 Nintendo Co., Ltd. Object control program, object control apparatus, object control system, and object control method
US8325136B2 (en) 2009-12-01 2012-12-04 Raytheon Company Computer display pointer device for a display
US9486701B2 (en) * 2009-12-30 2016-11-08 Crytek Gmbh Computer-controlled video entertainment system
US8977972B2 (en) * 2009-12-31 2015-03-10 Intel Corporation Using multi-modal input to control multiple objects on a display
JP5898842B2 (en) * 2010-01-14 2016-04-06 Nintendo Co., Ltd. Portable information processing device, portable game device
EP2355526A3 (en) * 2010-01-14 2012-10-31 Nintendo Co., Ltd. Computer-readable storage medium having stored therein display control program, display control apparatus, display control system, and display control method
US8730309B2 (en) 2010-02-23 2014-05-20 Microsoft Corporation Projectors and depth cameras for deviceless augmented reality and interaction
JP5800501B2 (en) * 2010-03-12 2015-10-28 Nintendo Co., Ltd. Display control program, display control apparatus, display control system, and display control method
US8614663B2 (en) * 2010-03-15 2013-12-24 Empire Technology Development, Llc Selective motor control classification
CN102238350A (en) * 2010-04-30 2011-11-09 Hongfujin Precision Industry (Shenzhen) Co., Ltd. System and method for controlling television channel switching remotely
US8384770B2 (en) 2010-06-02 2013-02-26 Nintendo Co., Ltd. Image display system, image display apparatus, and image display method
US9086727B2 (en) 2010-06-22 2015-07-21 Microsoft Technology Licensing, Llc Free space directional force feedback apparatus
JP5993856B2 (en) 2010-09-09 2016-09-14 Tweedletech, Llc Board game with dynamic feature tracking
JP5058319B2 (en) * 2010-09-14 2012-10-24 Sony Computer Entertainment Inc. Information processing system
JP5101677B2 (en) 2010-09-14 2012-12-19 Sony Computer Entertainment Inc. Information processing system
US8781629B2 (en) * 2010-09-22 2014-07-15 Toyota Motor Engineering & Manufacturing North America, Inc. Human-robot interface apparatuses and methods of controlling robots
WO2012065175A2 (en) * 2010-11-11 2012-05-18 The Johns Hopkins University Human-machine collaborative robotic systems
EP2638455A1 (en) 2010-11-12 2013-09-18 3M Innovative Properties Company Interactive polarization-preserving projection display
TWI528224B (en) * 2010-11-15 2016-04-01 Institute for Information Industry 3D gesture manipulation method and apparatus
TW201222429A (en) * 2010-11-23 2012-06-01 Inventec Corp Web camera device and operating method thereof
JP6021296B2 (en) 2010-12-16 2016-11-09 Nintendo Co., Ltd. Display control program, display control device, display control system, and display control method
US9821224B2 (en) * 2010-12-21 2017-11-21 Microsoft Technology Licensing, Llc Driving simulator control with virtual skeleton
US9354718B2 (en) 2010-12-22 2016-05-31 Zspace, Inc. Tightly coupled interactive stereo display
US10025388B2 (en) * 2011-02-10 2018-07-17 Continental Automotive Systems, Inc. Touchless human machine interface
US9329469B2 (en) 2011-02-17 2016-05-03 Microsoft Technology Licensing, Llc Providing an interactive experience using a 3D depth camera and a 3D projector
US9480907B2 (en) 2011-03-02 2016-11-01 Microsoft Technology Licensing, Llc Immersive display with peripheral illusions
US9857868B2 (en) 2011-03-19 2018-01-02 The Board Of Trustees Of The Leland Stanford Junior University Method and system for ergonomic touch-free interface
US9323249B2 (en) * 2011-03-31 2016-04-26 King Abdulaziz City for Science & Technology Matrix code symbols for accurate robot tracking
US20120259638A1 (en) * 2011-04-08 2012-10-11 Sony Computer Entertainment Inc. Apparatus and method for determining relevance of input speech
US8791901B2 (en) * 2011-04-12 2014-07-29 Sony Computer Entertainment, Inc. Object tracking with projected reference patterns
KR101804848B1 (en) * 2011-04-22 2017-12-06 Samsung Electronics Co., Ltd. Video Object Detecting Apparatus, Video Object Deforming Apparatus and Method thereof
US8840466B2 (en) 2011-04-25 2014-09-23 Aquifi, Inc. Method and system to create three-dimensional mapping in a two-dimensional game
US20120277001A1 (en) * 2011-04-28 2012-11-01 Microsoft Corporation Manual and Camera-based Game Control
BR112013028559A2 (en) * 2011-05-09 2017-01-17 Koninkl Philips Nv apparatus for rotating an object on a screen, device, method for rotating an object on a screen, computer program product and storage medium
US10120438B2 (en) 2011-05-25 2018-11-06 Sony Interactive Entertainment Inc. Eye gaze to alter device behavior
JP2012247936A (en) * 2011-05-26 2012-12-13 Sony Corp Information processor, display control method and program
US9597587B2 (en) 2011-06-08 2017-03-21 Microsoft Technology Licensing, Llc Locational node device
US20120320080A1 (en) * 2011-06-14 2012-12-20 Microsoft Corporation Motion based virtual object navigation
FR2976681B1 (en) * 2011-06-17 2013-07-12 Inst Nat Rech Inf Automat System for colocating a touch screen and a virtual object and device for handling virtual objects using such a system
US20130009944A1 (en) * 2011-07-06 2013-01-10 BrainDistrict 3D computer graphics object and method
US9318129B2 (en) 2011-07-18 2016-04-19 At&T Intellectual Property I, Lp System and method for enhancing speech activity detection using facial feature detection
US9417754B2 (en) 2011-08-05 2016-08-16 P4tents1, LLC User interface system, method, and computer program product
JP5772390B2 (en) * 2011-08-25 2015-09-02 Seiko Epson Corporation Display device, display device control method, and program
WO2013029162A1 (en) * 2011-08-31 2013-03-07 Smart Technologies Ulc Detecting pointing gestures in a three-dimensional graphical user interface
US9778737B1 (en) * 2011-08-31 2017-10-03 Amazon Technologies, Inc. Game recommendations based on gesture type
TWI571790B (en) * 2011-11-10 2017-02-21 Institute for Information Industry Method and electronic device for changing coordinate values of icons according to a sensing signal
TWI444723B (en) 2011-11-18 2014-07-11 Au Optronics Corp Image eraser of electronic writing system and operating method of electronic writing system
US8510200B2 (en) 2011-12-02 2013-08-13 Spireon, Inc. Geospatial data based assessment of driver behavior
US10169822B2 (en) 2011-12-02 2019-01-01 Spireon, Inc. Insurance rate optimization through driver behavior monitoring
US20130147919A1 (en) * 2011-12-09 2013-06-13 California Institute Of Technology Multi-View Diffraction Grating Imaging With Two-Dimensional Displacement Measurement For Three-Dimensional Deformation Or Profile Output
US8811938B2 (en) 2011-12-16 2014-08-19 Microsoft Corporation Providing a user interface experience based on inferred vehicle state
TWI569258B (en) * 2012-01-02 2017-02-01 MStar Semiconductor, Inc. Voice control system and associated control method applied to electronic apparatus
WO2013103410A1 (en) 2012-01-05 2013-07-11 California Institute Of Technology Imaging surround systems for touch-free display control
US8782565B2 (en) * 2012-01-12 2014-07-15 Cisco Technology, Inc. System for selecting objects on display
US11493998B2 (en) 2012-01-17 2022-11-08 Ultrahaptics IP Two Limited Systems and methods for machine control
US20130187890A1 (en) * 2012-01-20 2013-07-25 Mun Ki PAEG User interface apparatus and method for 3d space-touch using multiple imaging sensors
US8854433B1 (en) 2012-02-03 2014-10-07 Aquifi, Inc. Method and system enabling natural user interface gestures with an electronic system
TWI482049B (en) * 2012-03-02 2015-04-21 Realtek Semiconductor Corp Multimedia interaction system and related computer program product capable of blocking multimedia interaction commands that against interactive rules
CN103294432B (en) * 2012-03-02 2016-02-10 Realtek Semiconductor Corp. Multimedia interaction system capable of blocking interaction commands, and related apparatus and method
TWI499935B (en) * 2012-08-30 2015-09-11 Realtek Semiconductor Corp Multimedia interaction system and related computer program product capable of avoiding unexpected interaction behavior
CN104169966A (en) * 2012-03-05 2014-11-26 Microsoft Corporation Generation of depth images based upon light falloff
US9245062B2 (en) * 2012-03-22 2016-01-26 Virtek Vision International Inc. Laser projection system using variable part alignment
US9200899B2 (en) * 2012-03-22 2015-12-01 Virtek Vision International, Inc. Laser projection system and method
JP2013213946A (en) * 2012-04-02 2013-10-17 Casio Comput Co Ltd Performance device, method, and program
TWI454966B (en) * 2012-04-24 2014-10-01 Wistron Corp Gesture control method and gesture control device
US9111135B2 (en) * 2012-06-25 2015-08-18 Aquifi, Inc. Systems and methods for tracking human hands using parts based template matching using corresponding pixels in bounded regions of a sequence of frames that are a specified distance interval from a reference camera
US9098739B2 (en) 2012-06-25 2015-08-04 Aquifi, Inc. Systems and methods for tracking human hands using parts based template matching
WO2014008185A1 (en) * 2012-07-02 2014-01-09 Sony Computer Entertainment Inc. Methods and systems for interaction with an expanded information space
US8836768B1 (en) 2012-09-04 2014-09-16 Aquifi, Inc. Method and system enabling natural user interface gestures with user wearable glasses
JP6064464B2 (en) * 2012-09-10 2017-01-25 Seiko Epson Corporation Head-mounted display device, head-mounted display device control method, and authentication system
US20140085178A1 (en) * 2012-09-24 2014-03-27 3M Innovative Properties Company Method and apparatus for controlling information display areas in a mirror display
US20140095061A1 (en) * 2012-10-03 2014-04-03 Richard Franklin HYDE Safety distance monitoring of adjacent vehicles
US9779379B2 (en) 2012-11-05 2017-10-03 Spireon, Inc. Container verification through an electrical receptacle and plug associated with a container and a transport vehicle of an intermodal freight transport system
US8933802B2 (en) 2012-11-05 2015-01-13 Spireon, Inc. Switch and actuator coupling in a chassis of a container associated with an intermodal freight transport system
US9303960B2 (en) * 2012-11-06 2016-04-05 Oren Uhr Electronic target for simulated shooting
US9251713B1 (en) 2012-11-20 2016-02-02 Anthony J. Giovanniello System and process for assessing a user and for assisting a user in rehabilitation
TW201423484A (en) * 2012-12-14 2014-06-16 Pixart Imaging Inc Motion detection system
US9776077B2 (en) 2013-01-19 2017-10-03 Cadillac Jack, Inc. Electronic gaming system with human gesturing inputs
US20140179435A1 (en) * 2012-12-20 2014-06-26 Cadillac Jack Electronic gaming system with 3d depth image sensing
TWI486820B (en) * 2012-12-28 2015-06-01 Wistron Corp Coordinate transformation method and computer system for interactive system
WO2014107434A1 (en) 2013-01-02 2014-07-10 California Institute Of Technology Single-sensor system for extracting depth information from image blur
WO2014106862A2 (en) * 2013-01-03 2014-07-10 Suman Saurav A method and system enabling control of different digital devices using gesture or motion control
US9092665B2 (en) 2013-01-30 2015-07-28 Aquifi, Inc. Systems and methods for initializing motion tracking of human hands
US9129155B2 (en) 2013-01-30 2015-09-08 Aquifi, Inc. Systems and methods for initializing motion tracking of human hands using template matching within bounded regions determined using a depth map
US10134267B2 (en) 2013-02-22 2018-11-20 Universal City Studios Llc System and method for tracking a passive wand and actuating an effect based on a detected wand path
US20140245200A1 (en) * 2013-02-25 2014-08-28 Leap Motion, Inc. Display control with gesture-selectable control paradigms
JP6286836B2 (en) * 2013-03-04 2018-03-07 Ricoh Company, Ltd. Projection system, projection apparatus, projection method, and projection program
US20140251114A1 (en) * 2013-03-08 2014-09-11 Miselu, Inc. Keyboard system with multiple cameras
US9766709B2 (en) 2013-03-15 2017-09-19 Leap Motion, Inc. Dynamic user interactions for display control
US20140349762A1 (en) * 2013-03-15 2014-11-27 Alfred M. Haas Gtg
US9298266B2 (en) 2013-04-02 2016-03-29 Aquifi, Inc. Systems and methods for implementing three-dimensional (3D) gesture based graphical user interfaces (GUI) that incorporate gesture reactive interface objects
US9380295B2 (en) * 2013-04-21 2016-06-28 Zspace, Inc. Non-linear navigation of a three dimensional stereoscopic display
US9234742B2 (en) * 2013-05-01 2016-01-12 Faro Technologies, Inc. Method and apparatus for using gestures to control a laser tracker
US9482554B2 (en) * 2013-05-02 2016-11-01 Hillcrest Laboratories, Inc. Gyroscope stabilizer filter
DE102013208709A1 (en) * 2013-05-13 2014-11-13 Bayerische Motoren Werke Aktiengesellschaft Method for determining input data of a driver assistance unit
US11027193B2 (en) 2013-07-01 2021-06-08 Flyingtee Tech, Llc Two-environment game play system
JP2015011328A (en) * 2013-07-02 2015-01-19 Toshiba Corporation Liquid crystal optical device and image display device
GB2516282B (en) * 2013-07-17 2017-07-26 Vision Rt Ltd Method of calibration of a stereoscopic camera system for use with a radio therapy treatment apparatus
US9798388B1 (en) 2013-07-31 2017-10-24 Aquifi, Inc. Vibrotactile system to augment 3D input systems
WO2015028712A1 (en) 2013-08-26 2015-03-05 Multitouch Oy A method and system for authentication and a marker therefor
US9779449B2 (en) 2013-08-30 2017-10-03 Spireon, Inc. Veracity determination through comparison of a geospatial location of a vehicle with a provided data
CN103677446A (en) * 2013-11-14 2014-03-26 Leshi Zhixin Electronic Technology (Tianjin) Co., Ltd. Display equipment and camera type touch control method and device
US9011246B1 (en) * 2013-11-18 2015-04-21 Scott Kier Systems and methods for immersive backgrounds
US9463379B1 (en) 2013-12-17 2016-10-11 Thinkwell Group Ride vehicle mounted interactive game system
US20150187198A1 (en) * 2013-12-27 2015-07-02 Aaron G. Silverberg Orientation Measurement And Guidance Of Manually Positioned Objects
US20150186991A1 (en) 2013-12-31 2015-07-02 David M. Meyer Creditor alert when a vehicle enters an impound lot
US9507417B2 (en) 2014-01-07 2016-11-29 Aquifi, Inc. Systems and methods for implementing head tracking based graphical user interfaces (GUI) that incorporate gesture reactive interface objects
US9720506B2 (en) 2014-01-14 2017-08-01 Microsoft Technology Licensing, Llc 3D silhouette sensing system
US9560449B2 (en) 2014-01-17 2017-01-31 Sony Corporation Distributed wireless speaker system
US9288597B2 (en) 2014-01-20 2016-03-15 Sony Corporation Distributed wireless speaker system with automatic configuration determination when new speakers are added
US9426551B2 (en) 2014-01-24 2016-08-23 Sony Corporation Distributed wireless speaker system with light show
US9369801B2 (en) 2014-01-24 2016-06-14 Sony Corporation Wireless speaker system with noise cancelation
US9866986B2 (en) 2014-01-24 2018-01-09 Sony Corporation Audio speaker system with virtual music performance
US9619105B1 (en) 2014-01-30 2017-04-11 Aquifi, Inc. Systems and methods for gesture based interaction with viewpoint dependent user interfaces
US9232335B2 (en) 2014-03-06 2016-01-05 Sony Corporation Networked speaker system with follow me
US9483997B2 (en) 2014-03-10 2016-11-01 Sony Corporation Proximity detection of candidate companion display device in same room as primary display using infrared signaling
CN103941865A (en) * 2014-04-08 2014-07-23 三和智控(北京)系统集成有限公司 Method for establishing discrete human-computer interaction mechanism
US9696414B2 (en) 2014-05-15 2017-07-04 Sony Corporation Proximity detection of candidate companion display device in same room as primary display using sonic signaling
US10070291B2 (en) 2014-05-19 2018-09-04 Sony Corporation Proximity detection of candidate companion display device in same room as primary display using low energy bluetooth
US10207193B2 (en) 2014-05-21 2019-02-19 Universal City Studios Llc Optical tracking system for automation of amusement park elements
US9600999B2 (en) 2014-05-21 2017-03-21 Universal City Studios Llc Amusement park element tracking system
US10025990B2 (en) 2014-05-21 2018-07-17 Universal City Studios Llc System and method for tracking vehicles in parking structures and intersections
US9429398B2 (en) 2014-05-21 2016-08-30 Universal City Studios Llc Optical tracking for controlling pyrotechnic show elements
US10061058B2 (en) 2014-05-21 2018-08-28 Universal City Studios Llc Tracking system and method for use in surveying amusement park equipment
US9433870B2 (en) 2014-05-21 2016-09-06 Universal City Studios Llc Ride vehicle tracking and control system using passive tracking elements
US9616350B2 (en) 2014-05-21 2017-04-11 Universal City Studios Llc Enhanced interactivity in an amusement park environment using passive tracking elements
WO2016022097A1 (en) * 2014-08-05 2016-02-11 Hewlett-Packard Development Company, L.P. Determining a position of an input object
US9555284B2 (en) 2014-09-02 2017-01-31 Origin, Llc Multiple sensor tracking system and method
US9977565B2 (en) 2015-02-09 2018-05-22 Leapfrog Enterprises, Inc. Interactive educational system with light emitting controller
US10684485B2 (en) 2015-03-06 2020-06-16 Sony Interactive Entertainment Inc. Tracking system for head mounted display
US10296086B2 (en) 2015-03-20 2019-05-21 Sony Interactive Entertainment Inc. Dynamic gloves to convey sense of touch and movement for virtual objects in HMD rendered environments
US9551788B2 (en) 2015-03-24 2017-01-24 Jim Epler Fleet pan to provide measurement and location of a stored transport item while maximizing space in an interior cavity of a trailer
US10146414B2 (en) * 2015-06-09 2018-12-04 Pearson Education, Inc. Augmented physical and virtual manipulatives
KR102395515B1 (en) * 2015-08-12 2022-05-10 Samsung Electronics Co., Ltd. Touch event processing method and electronic device supporting the same
CN105117017A (en) * 2015-09-07 2015-12-02 Zhongjing Shijie (Beijing) Technology Co., Ltd. Gloves used in interaction control of virtual reality and augmented reality
KR101972472B1 (en) * 2015-10-23 2019-04-25 Industry-Academic Cooperation Foundation of Sejong University System and method for virtual fitness experience
KR20170057056A (en) * 2015-11-16 2017-05-24 Samsung Electronics Co., Ltd. Remote control apparatus, driving method of remote control apparatus, image display apparatus, driving method of image display apparatus, and computer readable recording medium
US9693168B1 (en) 2016-02-08 2017-06-27 Sony Corporation Ultrasonic speaker assembly for audio spatial effect
US9826332B2 (en) 2016-02-09 2017-11-21 Sony Corporation Centralized wireless speaker system
US9826330B2 (en) 2016-03-14 2017-11-21 Sony Corporation Gimbal-mounted linear ultrasonic speaker assembly
US9693169B1 (en) 2016-03-16 2017-06-27 Sony Corporation Ultrasonic speaker assembly with ultrasonic room mapping
US10037626B2 (en) 2016-06-30 2018-07-31 Microsoft Technology Licensing, Llc Interaction with virtual objects based on determined restrictions
US9794724B1 (en) 2016-07-20 2017-10-17 Sony Corporation Ultrasonic speaker assembly using variable carrier frequency to establish third dimension sound locating
US10720082B1 (en) * 2016-09-08 2020-07-21 Ctskh, Llc Device and system to teach stem lessons using hands-on learning method
US9854362B1 (en) 2016-10-20 2017-12-26 Sony Corporation Networked speaker system with LED-based wireless communication and object detection
US9924286B1 (en) 2016-10-20 2018-03-20 Sony Corporation Networked speaker system with LED-based wireless communication and personal identifier
US10075791B2 (en) 2016-10-20 2018-09-11 Sony Corporation Networked speaker system with LED-based wireless communication and room mapping
CN106582012B (en) * 2016-12-07 2018-12-11 Tencent Technology (Shenzhen) Co., Ltd. Climbing operation processing method and device in a VR scene
KR20180091552A (en) * 2017-02-07 2018-08-16 Electronics and Telecommunications Research Institute Information processing system and information processing method
US10810903B2 (en) 2017-04-05 2020-10-20 Flyingtee Tech, Llc Computerized method of detecting and depicting a travel path of a golf ball
CN107684669B (en) 2017-08-21 2020-04-17 Shanghai United Imaging Healthcare Co., Ltd. System and method for correcting alignment apparatus
CN107688389B (en) * 2017-08-25 2021-08-13 Beijing Jinheng Boyuan Technology Co., Ltd. VR grabbing action optimization method and device
US10110999B1 (en) 2017-09-05 2018-10-23 Motorola Solutions, Inc. Associating a user voice query with head direction
US10224033B1 (en) 2017-09-05 2019-03-05 Motorola Solutions, Inc. Associating a user voice query with head direction
CN111213184A (en) * 2017-11-30 2020-05-29 Hewlett-Packard Development Company, L.P. Virtual dashboard implementation based on augmented reality
US10653957B2 (en) * 2017-12-06 2020-05-19 Universal City Studios Llc Interactive video game system
US10438450B2 (en) 2017-12-20 2019-10-08 Igt Craps gaming system and method
CN108229391B (en) * 2018-01-02 2021-12-24 BOE Technology Group Co., Ltd. Gesture recognition device, server thereof, gesture recognition system and gesture recognition method
CN108536565A (en) * 2018-01-04 2018-09-14 Zhengzhou Yunhai Information Technology Co., Ltd. CPLD-FPGA version and session information three-state display module implementation method
JP6937995B2 (en) * 2018-04-05 2021-09-22 Omron Corporation Object recognition processing device and method, and object picking device and method
US10698206B2 (en) 2018-05-31 2020-06-30 Renault Innovation Silicon Valley Three dimensional augmented reality involving a vehicle
US10682572B2 (en) 2018-07-25 2020-06-16 Cameron Wilson Video game reticle
US10776073B2 (en) 2018-10-08 2020-09-15 Nuance Communications, Inc. System and method for managing a mute button setting for a conference call
US10623859B1 (en) 2018-10-23 2020-04-14 Sony Corporation Networked speaker system with combined power over Ethernet and audio delivery
US11224798B2 (en) 2018-12-27 2022-01-18 Mattel, Inc. Skills game
CN109794064B (en) * 2018-12-29 2020-07-03 Tencent Technology (Shenzhen) Co., Ltd. Interactive scenario implementation method, device, terminal and storage medium
WO2020141476A1 (en) * 2019-01-04 2020-07-09 Gentex Corporation Control apparatus and methods for adaptive lighting array
WO2020141475A1 (en) 2019-01-04 2020-07-09 Gentex Corporation Control for adaptive lighting array
WO2020141474A1 (en) 2019-01-04 2020-07-09 Gentex Corporation Authentication and informational displays with adaptive lighting array
TWI736857B (en) * 2019-03-07 2021-08-21 Tainan National University of the Arts An overhang rotatable multi-sensory device and a virtual reality multi-sensory system comprising the same
US11347331B2 (en) 2019-04-08 2022-05-31 Dell Products L.P. Portable information handling system stylus garage and charge interface
US10788865B1 (en) 2019-04-26 2020-09-29 Dell Products L.P. Information handling system dual pivot hinge signal path
US11009936B2 (en) 2019-05-02 2021-05-18 Dell Products L.P. Information handling system power control sensor
US11017742B2 (en) 2019-05-02 2021-05-25 Dell Products L.P. Information handling system multiple display viewing angle brightness adjustment
US11341925B2 (en) 2019-05-02 2022-05-24 Dell Products L.P. Information handling system adapting multiple display visual image presentations
JP7286815B2 (en) 2019-06-20 2023-06-05 ジェンテックス コーポレイション Illumination system and method for object tracking
EP3969882A4 (en) 2019-06-20 2022-06-29 Gentex Corporation System and method for automated modular illumination and deployment
US10942585B2 (en) 2019-07-22 2021-03-09 Zspace, Inc. Trackability enhancement of a passive stylus
US10976818B2 (en) 2019-08-21 2021-04-13 Universal City Studios Llc Interactive attraction system and method for object and user association
CN112902898B (en) * 2019-12-03 2022-11-29 Delta Electronics, Inc. Three-dimensional measuring device and applicable mechanical arm correction method
CN111726694B (en) * 2020-06-30 2022-06-03 Beijing QIYI Century Science & Technology Co., Ltd. Interactive video recovery playing method and device, electronic equipment and storage medium
US11165969B1 (en) * 2020-08-03 2021-11-02 Sky Castle Toys LLC System and method for adding auxiliary lights to a camera to create fluorescence in selected features of a captured image
WO2023076128A1 (en) * 2021-10-25 2023-05-04 Within Unlimited, Inc. Virtual and augmented reality boxing activity or game, systems, methods, and devices
WO2024030366A1 (en) * 2022-08-01 2024-02-08 Richard Johnston Computerized method and computing platform for centrally managing skill-based competitions

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4146924A (en) * 1975-09-22 1979-03-27 Board Of Regents For Education Of The State Of Rhode Island System for visually determining position in space and/or orientation in space and apparatus employing same
US4219847A (en) * 1978-03-01 1980-08-26 Canadian Patents & Development Limited Method and apparatus of determining the center of area or centroid of a geometrical area of unspecified shape lying in a larger x-y scan field
US4631847A (en) * 1980-12-01 1986-12-30 Laurence Colin Encapsulated art
US4654949A (en) * 1982-02-16 1987-04-07 Diffracto Ltd. Method for automatically handling, assembling and working on objects
US4672564A (en) * 1984-11-15 1987-06-09 Honeywell Inc. Method and apparatus for determining location and orientation of objects
US4843568A (en) * 1986-04-11 1989-06-27 Krueger Myron W Real time perception of and response to the actions of an unencumbered participant/user
US5148591A (en) * 1981-05-11 1992-09-22 Sensor Adaptive Machines, Inc. Vision target based assembly
US5161078A (en) * 1989-06-06 1992-11-03 U.S. Philips Corporation Magnetic tape system including a rotatable head support for rotating the magnetic head of the system through substantially 180°
US5168531A (en) * 1991-06-27 1992-12-01 Digital Equipment Corporation Real-time recognition of pointing information from video
US5297061A (en) * 1993-05-19 1994-03-22 University Of Maryland Three dimensional pointing device monitored by computer vision
US5388059A (en) * 1992-12-30 1995-02-07 University Of Maryland Computer vision system for accurate monitoring of object pose
US5454043A (en) * 1993-07-30 1995-09-26 Mitsubishi Electric Research Laboratories, Inc. Dynamic and static hand gesture recognition through low-level image analysis
US5506682A (en) * 1982-02-16 1996-04-09 Sensor Adaptive Machines Inc. Robot vision using targets
US5581276A (en) * 1992-09-08 1996-12-03 Kabushiki Kaisha Toshiba 3D human interface apparatus using motion recognition based on dynamic image processing
US5594469A (en) * 1995-02-21 1997-01-14 Mitsubishi Electric Information Technology Center America Inc. Hand gesture machine control system
US5617312A (en) * 1993-11-19 1997-04-01 Hitachi, Ltd. Computer system that enters control information by means of video camera
US6005548A (en) * 1996-08-14 1999-12-21 Latypov; Nurakhmed Nurislamovich Method for tracking and displaying user's spatial position and orientation, a method for representing virtual reality for a user, and systems of embodiment of such methods
US6030290A (en) * 1997-06-24 2000-02-29 Powell; Donald E Momentary contact motion switch for video games
US6098458A (en) * 1995-11-06 2000-08-08 Impulse Technology, Ltd. Testing and training system for assessing movement and agility skills without a confining field
US6430997B1 (en) * 1995-11-06 2002-08-13 Trazer Technologies, Inc. System and method for tracking and assessing movement skills in multidimensional space

Family Cites Families (95)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3909002A (en) 1970-04-02 1975-09-30 David Levy Data-processing system for determining gains and losses from bets
US3846826A (en) 1971-08-12 1974-11-05 R Mueller Direct television drawing and image manipulating system
JPS51112236A (en) 1975-03-28 1976-10-04 Hitachi Ltd Shape position recognizer unit
CA1021818A (en) 1976-10-01 1977-11-29 A. Marcel Giguere Apparatus for progressive and controlled physiotherapy of the human foot after an accident
US4305131A (en) 1979-02-05 1981-12-08 Best Robert M Dialog between TV movies and human viewers
US4339798A (en) 1979-12-17 1982-07-13 Remote Dynamics Remote gaming system
JPS56132505A (en) 1980-03-24 1981-10-16 Hitachi Ltd Position detecting method
US4484179A (en) 1980-04-16 1984-11-20 At&T Bell Laboratories Touch position sensitive surface
US4686374A (en) 1980-06-26 1987-08-11 Diffracto Ltd. Surface reflectivity detector with oil mist reflectivity enhancement
US4375674A (en) 1980-10-17 1983-03-01 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Kinesimetric method and apparatus
US4396945A (en) 1981-08-19 1983-08-02 Solid Photography Inc. Method of sensing the position and orientation of elements in space
US4475122A (en) 1981-11-09 1984-10-02 Tre Semiconductor Equipment Corporation Automatic wafer alignment technique
US4542375A (en) 1982-02-11 1985-09-17 At&T Bell Laboratories Deformable touch sensitive surface
US4613942A (en) 1982-02-19 1986-09-23 Chen Richard M Orientation and control system for robots
US6442465B2 (en) 1992-05-05 2002-08-27 Automotive Technologies International, Inc. Vehicular component control systems and methods
US4416924A (en) 1982-09-23 1983-11-22 Celanese Corporation Polycarbonate sizing finish and method of application thereof
US4631676A (en) 1983-05-25 1986-12-23 Hospital For Joint Diseases Or Computerized video gait and motion analysis system and method
US4654872A (en) * 1983-07-25 1987-03-31 Omron Tateisi Electronics Co. System for recognizing three-dimensional objects
GB2144582B (en) 1983-08-05 1987-06-10 Nintendo Co Ltd Multi-directional electrical switch
US4602280A (en) 1983-12-05 1986-07-22 Maloomian Laurence G Weight and/or measurement reduction preview system
US4629319A (en) 1984-02-14 1986-12-16 Diffracto Ltd. Panel surface flaw inspection
JPS62247410A (en) 1985-12-04 1987-10-28 Aisin Seiki Co Ltd Energization controller for electric apparatus
US4791589A (en) 1986-10-31 1988-12-13 Tektronix, Inc. Processing circuit for capturing event in digital camera system
JPS63167923A (en) * 1987-01-05 1988-07-12 Pfu Ltd Image data input device
US4988981B1 (en) * 1987-03-17 1999-05-18 Vpl Newco Inc Computer data entry and manipulation apparatus and method
JPH01158579A (en) 1987-09-09 1989-06-21 Aisin Seiki Co Ltd Image recognizing device
JPH0695008B2 (en) 1987-12-11 1994-11-24 Toshiba Corporation Monitoring device
JPH02199526A (en) 1988-10-14 1990-08-07 David G Capper Control interface apparatus
US5088928A (en) 1988-11-15 1992-02-18 Chan James K Educational/board game apparatus
US5072294A (en) 1989-06-07 1991-12-10 Loredan Biomedical, Inc. Method and apparatus for analyzing a body having a marker located thereon
JPH06105217B2 (en) 1990-01-11 1994-12-21 Kurabo Industries Ltd. Spectrometry
JP2854359B2 (en) 1990-01-24 1999-02-03 Fujitsu Limited Image processing system
CA2040273C (en) 1990-04-13 1995-07-18 Kazu Horiuchi Image displaying system
US5566283A (en) 1990-09-03 1996-10-15 Dainippon Printing Co., Ltd. Computer graphic image storage, conversion and generating apparatus
US5249053A (en) 1991-02-05 1993-09-28 Dycam Inc. Filmless digital camera with selective image compression
US5444462A (en) 1991-12-16 1995-08-22 Wambach; Mark L. Computer mouse glove with remote communication
US5999185A (en) * 1992-03-30 1999-12-07 Kabushiki Kaisha Toshiba Virtual reality control using image, model and control data to manipulate interactions
US5982352A (en) 1992-09-18 1999-11-09 Pryor; Timothy R. Method for providing human input to a computer
JP3255995B2 (en) 1992-10-23 2002-02-12 Hitachi, Ltd. Videophone equipment
US5376796A (en) 1992-11-25 1994-12-27 Adac Laboratories, Inc. Proximity detector for body contouring system of a medical camera
DE69432199T2 (en) * 1993-05-24 2004-01-08 Sun Microsystems, Inc., Mountain View Graphical user interface with methods for interfacing with remote control devices
US5365597A (en) 1993-06-11 1994-11-15 United Parcel Service Of America, Inc. Method and apparatus for passive autoranging using relaxation
JP3260216B2 (en) 1993-09-24 2002-02-25 Asahi Kogaku Kogyo Co., Ltd. CCD digital camera system
JP2552427B2 (en) 1993-12-28 1996-11-13 Konami Corporation TV play system
US5781650A (en) 1994-02-18 1998-07-14 University Of Central Florida Automatic feature detection and age classification of human faces in digital images
JPH07261920A (en) 1994-03-17 1995-10-13 Wacom Co Ltd Optical position detector and optical coordinate input device
JPH086708A (en) 1994-04-22 1996-01-12 Canon Inc Display device
JPH0816755A (en) * 1994-06-30 1996-01-19 Toshiba Corp Device and method for processing multimedia
NZ291950A (en) 1994-07-28 1998-06-26 Super Dimension Inc Computerised game board: location of toy figure sensed to actuate audio/visual display sequence
US5624117A (en) 1994-07-28 1997-04-29 Sugiyama Electron Co., Ltd. Game machine controller
US5999840A (en) 1994-09-01 1999-12-07 Massachusetts Institute Of Technology System and method of registration of three-dimensional data sets
US5926168A (en) 1994-09-30 1999-07-20 Fan; Nong-Qiang Remote pointers for interactive televisions
US5940126A (en) 1994-10-25 1999-08-17 Kabushiki Kaisha Toshiba Multiple image video camera apparatus
US5772522A (en) 1994-11-23 1998-06-30 United States Golf Association Method of and system for analyzing a golf club swing
US6727887B1 (en) * 1995-01-05 2004-04-27 International Business Machines Corporation Wireless pointing device for remote cursor control
US5682468A (en) 1995-01-23 1997-10-28 Intergraph Corporation OLE for design and modeling
US5717414A (en) * 1995-05-01 1998-02-10 Lockheed-Martin Tactical Defense Systems Video image tracking and mixing system
US5913727A (en) 1995-06-02 1999-06-22 Ahdoot; Ned Interactive movement and contact simulation game
JP3054681B2 (en) 1995-07-19 2000-06-19 Director-General of the Agency of Industrial Science and Technology Image processing method
EP0852732A1 (en) * 1995-09-21 1998-07-15 Omniplanar, Inc. Method and apparatus for determining position and orientation
ATE278227T1 (en) 1995-10-05 2004-10-15 Digital Biometrics Inc Game chip detection system
US6373472B1 (en) 1995-10-13 2002-04-16 Silviu Palalau Driver control interface system
US5966310A (en) 1996-02-13 1999-10-12 Sanyo Electric Co., Ltd. Personal design system and personal equipment production system for actually producing equipment having designed appearance
US5828770A (en) 1996-02-20 1998-10-27 Northern Digital Inc. System for determining the spatial position and angular orientation of an object
US5889505A (en) 1996-04-04 1999-03-30 Yale University Vision-based six-degree-of-freedom computer input device
JP3279479B2 (en) 1996-05-31 2002-04-30 Hitachi Kokusai Electric Inc. Video monitoring method and device
US6084979A (en) 1996-06-20 2000-07-04 Carnegie Mellon University Method for creating virtual reality
US6173068B1 (en) 1996-07-29 2001-01-09 Mikos, Ltd. Method and apparatus for recognizing and classifying individuals based on minutiae
US6954906B1 (en) * 1996-09-30 2005-10-11 Sony Corporation Image display processing apparatus that automatically changes position of sub-window relative to main window depending on distance at which sub window is commanded to be displayed
US6057856A (en) * 1996-09-30 2000-05-02 Sony Corporation 3D virtual reality multi-user interaction with superimposed positional information display for each user
US5878174A (en) 1996-11-12 1999-03-02 Ford Global Technologies, Inc. Method for lens distortion correction of photographic images for texture mapping
US5870771A (en) 1996-11-15 1999-02-09 Oberg; Larry B. Computerized system for selecting, adjusting, and previewing framing product combinations for artwork and other items to be framed
US6148100A (en) 1996-12-20 2000-11-14 Bechtel Bwxt Idaho, Llc 3-dimensional telepresence system for a robotic environment
US5904484A (en) 1996-12-23 1999-05-18 Burns; Dave Interactive motion training device and method
US6049327A (en) 1997-04-23 2000-04-11 Modern Cartoons, Ltd System for data management based on hand gestures
US6252598B1 (en) 1997-07-03 2001-06-26 Lucent Technologies Inc. Video hand image computer interface
US6788336B1 (en) 1997-07-15 2004-09-07 Silverbrook Research Pty Ltd Digital camera with integral color printer and modular replaceable print roll
US6597817B1 (en) 1997-07-15 2003-07-22 Silverbrook Research Pty Ltd Orientation detection for digital cameras
KR19990011180A (en) 1997-07-22 1999-02-18 Ja-Hong Koo How to select menu using image recognition
US20020036617A1 (en) * 1998-08-21 2002-03-28 Timothy R. Pryor Novel man machine interfaces and applications
US6750848B1 (en) 1998-11-09 2004-06-15 Timothy R. Pryor More useful man machine interfaces and applications
JPH11168549A (en) 1997-12-05 1999-06-22 Pioneer Electron Corp On-vehicle telephone set
US6342917B1 (en) 1998-01-16 2002-01-29 Xerox Corporation Image recording apparatus and method using light fields to track position and orientation
US6052132A (en) 1998-02-06 2000-04-18 Digital Equipment Corporation Technique for providing a computer generated face having coordinated eye and head movement
US6043805A (en) * 1998-03-24 2000-03-28 Hsieh; Kuan-Hong Controlling method for inputting messages to a computer
US6775361B1 (en) 1998-05-01 2004-08-10 Canon Kabushiki Kaisha Recording/playback apparatus with telephone and its control method, video camera with telephone and its control method, image communication apparatus, and storage medium
US6198485B1 (en) 1998-07-29 2001-03-06 Intel Corporation Method and apparatus for three-dimensional input entry
US6359647B1 (en) 1998-08-07 2002-03-19 Philips Electronics North America Corporation Automated camera handoff system for figure tracking in a multiple camera system
US6147678A (en) 1998-12-09 2000-11-14 Lucent Technologies Inc. Video hand image-three-dimensional computer interface with multiple degrees of freedom
US6204852B1 (en) 1998-12-09 2001-03-20 Lucent Technologies Inc. Video hand image three-dimensional computer interface
US6363160B1 (en) 1999-01-22 2002-03-26 Intel Corporation Interface using pattern recognition and tracking
US6508709B1 (en) 1999-06-18 2003-01-21 Jayant S. Karmarkar Virtual distributed multimedia gaming method and system based on actual regulated casino games
US6663491B2 (en) 2000-02-18 2003-12-16 Namco Ltd. Game apparatus, storage medium and computer program that adjust tempo of sound
GB2374266A (en) 2001-04-04 2002-10-09 Matsushita Comm Ind Uk Ltd Virtual user interface device
KR100600750B1 (en) 2004-07-27 2006-07-14 LG Electronics Inc. Mobile communication terminal having dual camera

Cited By (727)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9304593B2 (en) 1998-08-10 2016-04-05 Cybernet Systems Corporation Behavior recognition system
US7684592B2 (en) 1998-08-10 2010-03-23 Cybernet Systems Corporation Realtime object tracking system
US20090116692A1 (en) * 1998-08-10 2009-05-07 Paul George V Realtime object tracking system
US9468854B2 (en) 1999-02-26 2016-10-18 Mq Gaming, Llc Multi-platform gaming systems and methods
US8888576B2 (en) 1999-02-26 2014-11-18 Mq Gaming, Llc Multi-media interactive play system
US9731194B2 (en) 1999-02-26 2017-08-15 Mq Gaming, Llc Multi-platform gaming systems and methods
US9861887B1 (en) 1999-02-26 2018-01-09 Mq Gaming, Llc Multi-platform gaming systems and methods
US8758136B2 (en) 1999-02-26 2014-06-24 Mq Gaming, Llc Multi-platform gaming systems and methods
US10300374B2 (en) 1999-02-26 2019-05-28 Mq Gaming, Llc Multi-platform gaming systems and methods
US8654198B2 (en) 1999-05-11 2014-02-18 Timothy R. Pryor Camera based interaction and instruction
US10960283B2 (en) 1999-05-12 2021-03-30 Wilbert Quinc Murdock Smart system for display of dynamic movement parameters in sports and training
US10653964B2 (en) * 1999-05-12 2020-05-19 Wilbert Quinc Murdock Transmitting sensor data created in a game environment to a set of processors outside the game environment based on predefined event determinations
US20110066658A1 (en) * 1999-05-19 2011-03-17 Rhoads Geoffrey B Methods and Devices Employing Content Identifiers
US7536034B2 (en) 1999-05-19 2009-05-19 Digimarc Corporation Gestural use of wireless mobile phone devices to signal to remote systems
US8108484B2 (en) 1999-05-19 2012-01-31 Digimarc Corporation Fingerprints and machine-readable codes combined with user characteristics to obtain content or information
US8126200B2 (en) 1999-05-19 2012-02-28 Digimarc Corporation Methods and systems employing digital content
US8160304B2 (en) 1999-05-19 2012-04-17 Digimarc Corporation Interactive systems and methods employing wireless mobile devices
US20080019569A1 (en) * 1999-05-19 2008-01-24 Rhoads Geoffrey B Gestural Use of Wireless Mobile Phone Devices to Signal to Remote Systems
US20100185306A1 (en) * 1999-05-19 2010-07-22 Rhoads Geoffrey B Methods and Systems Employing Digital Content
US8543661B2 (en) 1999-05-19 2013-09-24 Digimarc Corporation Fingerprints and machine-readable codes combined with user characteristics to obtain content or information
US7406214B2 (en) 1999-05-19 2008-07-29 Digimarc Corporation Methods and devices employing optical sensors and/or steganography
US20050213790A1 (en) * 1999-05-19 2005-09-29 Rhoads Geoffrey B Methods for using wireless phones having optical capabilities
US8538064B2 (en) 1999-05-19 2013-09-17 Digimarc Corporation Methods and devices employing content identifiers
US8520900B2 (en) 1999-05-19 2013-08-27 Digimarc Corporation Methods and devices involving imagery and gestures
US20070250195A1 (en) * 1999-05-19 2007-10-25 Rhoads Geoffrey B Methods and Systems Employing Digital Content
US7565294B2 (en) 1999-05-19 2009-07-21 Digimarc Corporation Methods and systems employing digital content
US7174031B2 (en) 1999-05-19 2007-02-06 Digimarc Corporation Methods for using wireless phones having optical capabilities
US20080014917A1 (en) * 1999-06-29 2008-01-17 Rhoads Geoffrey B Wireless Mobile Phone Methods
US7760905B2 (en) 1999-06-29 2010-07-20 Digimarc Corporation Wireless mobile phone with content processing
US7050606B2 (en) * 1999-08-10 2006-05-23 Cybernet Systems Corporation Tracking and gesture recognition system particularly suited to vehicular control applications
US20070195997A1 (en) * 1999-08-10 2007-08-23 Paul George V Tracking and gesture recognition system particularly suited to vehicular control applications
US20020126876A1 (en) * 1999-08-10 2002-09-12 Paul George V. Tracking and gesture recognition system particularly suited to vehicular control applications
US20100062819A1 (en) * 1999-08-30 2010-03-11 Hannigan Brett T Methods and Related Toy and Game Applications Using Encoded Information
US8615471B2 (en) 1999-08-30 2013-12-24 Digimarc Corporation Methods and related toy and game applications using encoded information
US6593574B2 (en) * 1999-09-16 2003-07-15 Wayne State University Hand-held sound source gun for infrared imaging of sub-surface defects in materials
US7064332B2 (en) 1999-09-16 2006-06-20 Wayne State University Hand-held sound source for sonic infrared imaging of defects in materials
US20040245469A1 (en) * 1999-09-16 2004-12-09 Wayne State University Hand-held sound source for sonic infrared imaging of defects in materials
USRE43084E1 (en) 1999-10-29 2012-01-10 Smart Technologies Ulc Method and apparatus for inputting information including coordinate data
US8391851B2 (en) 1999-11-03 2013-03-05 Digimarc Corporation Gestural techniques with wireless mobile phone devices
US10304072B2 (en) 1999-12-03 2019-05-28 Nike, Inc. Interactive use and athletic performance monitoring and reward method, system, and computer program product
US10460337B2 (en) 1999-12-03 2019-10-29 Nike, Inc. Interactive use and athletic performance monitoring and reward method, system, and computer program product
US20050227811A1 (en) * 1999-12-03 2005-10-13 Nike, Inc. Game pod
US10282742B2 (en) 1999-12-03 2019-05-07 Nike, Inc. Interactive use and athletic performance monitoring and reward method, system, and computer program product
US8956228B2 (en) 1999-12-03 2015-02-17 Nike, Inc. Game pod
USRE42794E1 (en) 1999-12-27 2011-10-04 Smart Technologies Ulc Information-inputting device inputting contact point of object on recording surfaces as information
US7071918B2 (en) * 2000-01-28 2006-07-04 Hosiden Corporation Volume-integral type multi directional input apparatus
US20020186205A1 (en) * 2000-01-28 2002-12-12 Masahiko Nakamura Volume-integral type multi directional input apparatus
US20110081970A1 (en) * 2000-02-22 2011-04-07 Creative Kingdoms, Llc Systems and methods for providing interactive game play
US9138650B2 (en) 2000-02-22 2015-09-22 Mq Gaming, Llc Portable tracking device for entertainment purposes
US8686579B2 (en) 2000-02-22 2014-04-01 Creative Kingdoms, Llc Dual-range wireless controller
US9579568B2 (en) 2000-02-22 2017-02-28 Mq Gaming, Llc Dual-range wireless interactive entertainment device
US9149717B2 (en) 2000-02-22 2015-10-06 Mq Gaming, Llc Dual-range wireless interactive entertainment device
US8790180B2 (en) 2000-02-22 2014-07-29 Creative Kingdoms, Llc Interactive game and associated wireless toy
US8708821B2 (en) 2000-02-22 2014-04-29 Creative Kingdoms, Llc Systems and methods for providing interactive game play
US10188953B2 (en) 2000-02-22 2019-01-29 Mq Gaming, Llc Dual-range wireless interactive entertainment device
US9814973B2 (en) 2000-02-22 2017-11-14 Mq Gaming, Llc Interactive entertainment system
US9713766B2 (en) 2000-02-22 2017-07-25 Mq Gaming, Llc Dual-range wireless interactive entertainment device
US8814688B2 (en) 2000-02-22 2014-08-26 Creative Kingdoms, Llc Customizable toy for playing a wireless interactive game having both physical and virtual elements
US10307671B2 (en) 2000-02-22 2019-06-04 Mq Gaming, Llc Interactive entertainment system
US8538562B2 (en) 2000-03-07 2013-09-17 Motion Games, Llc Camera based interactive exercise
US7137711B1 (en) 2000-03-21 2006-11-21 Leonard Reiffel Multi-user retro reflector data input
US20060132433A1 (en) * 2000-04-17 2006-06-22 Virtual Technologies, Inc. Interface for controlling a graphical image
US7821498B2 (en) * 2000-04-17 2010-10-26 Immersion Corporation Interface for controlling a graphical image
US7000840B2 (en) 2000-05-03 2006-02-21 Leonard Reiffel Dual mode data imaging product
US6496812B1 (en) * 2000-05-13 2002-12-17 Object Power, Inc. Method and system for measuring and valuing contributions by group members to the achievement of a group goal
US7236162B2 (en) 2000-07-05 2007-06-26 Smart Technologies, Inc. Passive touch system and method of detecting user input
US8203535B2 (en) 2000-07-05 2012-06-19 Smart Technologies Ulc Passive touch system and method of detecting user input
US7755613B2 (en) 2000-07-05 2010-07-13 Smart Technologies Ulc Passive touch system and method of detecting user input
US8378986B2 (en) 2000-07-05 2013-02-19 Smart Technologies Ulc Passive touch system and method of detecting user input
US7692625B2 (en) 2000-07-05 2010-04-06 Smart Technologies Ulc Camera-based touch system
US20070075982A1 (en) * 2000-07-05 2007-04-05 Smart Technologies, Inc. Passive Touch System And Method Of Detecting User Input
US8055022B2 (en) 2000-07-05 2011-11-08 Smart Technologies Ulc Passive touch system and method of detecting user input
US20050088424A1 (en) * 2000-07-05 2005-04-28 Gerald Morrison Passive touch system and method of detecting user input
US20050077452A1 (en) * 2000-07-05 2005-04-14 Gerald Morrison Camera-based touch system
US20020022508A1 (en) * 2000-08-11 2002-02-21 Konami Corporation Fighting video game machine
US6918829B2 (en) * 2000-08-11 2005-07-19 Konami Corporation Fighting video game machine
US20040125224A1 (en) * 2000-08-18 2004-07-01 Leonard Reiffel Annotating imaged data product
US7034803B1 (en) 2000-08-18 2006-04-25 Leonard Reiffel Cursor display privacy product
US7161581B2 (en) 2000-08-18 2007-01-09 Leonard Reiffel Annotating imaged data product
US20020025061A1 (en) * 2000-08-23 2002-02-28 Leonard Metcalfe High speed and reliable determination of lumber quality using grain influenced distortion effects
US8040328B2 (en) 2000-10-11 2011-10-18 Peter Smith Books, papers, and downloaded information to facilitate human interaction with computers
US20080122805A1 (en) * 2000-10-11 2008-05-29 Peter Smith Books, papers, and downloaded information to facilitate human interaction with computers
US9320976B2 (en) 2000-10-20 2016-04-26 Mq Gaming, Llc Wireless toy systems and methods for interactive entertainment
US10307683B2 (en) 2000-10-20 2019-06-04 Mq Gaming, Llc Toy incorporating RFID tag
US8961260B2 (en) 2000-10-20 2015-02-24 Mq Gaming, Llc Toy incorporating RFID tracking device
US9931578B2 (en) 2000-10-20 2018-04-03 Mq Gaming, Llc Toy incorporating RFID tag
US20090124165A1 (en) * 2000-10-20 2009-05-14 Creative Kingdoms, Llc Wireless toy systems and methods for interactive entertainment
US9480929B2 (en) 2000-10-20 2016-11-01 Mq Gaming, Llc Toy incorporating RFID tag
US8753165B2 (en) 2000-10-20 2014-06-17 Mq Gaming, Llc Wireless toy systems and methods for interactive entertainment
US7342572B2 (en) * 2000-10-24 2008-03-11 Microsoft Corp. System and method for transforming an ordinary computer monitor into a touch screen
US20040207600A1 (en) * 2000-10-24 2004-10-21 Microsoft Corporation System and method for transforming an ordinary computer monitor into a touch screen
US20040032594A1 (en) * 2000-11-08 2004-02-19 Gerhard Weber Surface mapping and generating devices and methods for surface mapping and surface generation
US8922635B2 (en) 2000-11-08 2014-12-30 Institut Straumann Ag Surface mapping and generating devices and methods for surface mapping and surface generation
US7399181B2 (en) * 2000-11-08 2008-07-15 Aepsilon Gmbh Surface mapping and generating devices and methods for surface mapping and surface generation
US8026943B2 (en) 2000-11-08 2011-09-27 Institut Straumann Ag Surface mapping and generating devices and methods for surface mapping and surface generation
US8982201B2 (en) 2000-11-08 2015-03-17 Institut Straumann Ag Surface mapping and generating devices and methods for surface mapping and surface generation
US20080131833A1 (en) * 2000-11-08 2008-06-05 Gerhard Weber Surface mapping and generating devices and methods for surface mapping and surface generation
USRE44054E1 (en) 2000-12-08 2013-03-05 Ganz Graphic chatting with organizational avatars
US6945460B2 (en) 2000-12-15 2005-09-20 Leonard Reiffel Imaged coded data source transducer product
US20050102332A1 (en) * 2000-12-15 2005-05-12 Leonard Reiffel Multi-imager multi-source multi-use coded data source data input product
US20040041027A1 (en) * 2000-12-15 2004-03-04 Leonard Reiffel Imaged coded data source transducer product
US20040027455A1 (en) * 2000-12-15 2004-02-12 Leonard Reiffel Imaged coded data source tracking product
US7184075B2 (en) 2000-12-15 2007-02-27 Leonard Reiffel Imaged coded data source tracking product
US7099070B2 (en) 2000-12-15 2006-08-29 Leonard Reiffel Multi-imager multi-source multi-use coded data source data input product
US6774811B2 (en) * 2001-02-02 2004-08-10 International Business Machines Corporation Designation and opportunistic tracking of valuables
US20020147650A1 (en) * 2001-02-02 2002-10-10 International Business Machines Corporation Designation and opportunistic tracking of valuables
US9162148B2 (en) 2001-02-22 2015-10-20 Mq Gaming, Llc Wireless entertainment device, system, and method
US10179283B2 (en) 2001-02-22 2019-01-15 Mq Gaming, Llc Wireless entertainment device, system, and method
US8913011B2 (en) 2001-02-22 2014-12-16 Creative Kingdoms, Llc Wireless entertainment device, system, and method
US8711094B2 (en) 2001-02-22 2014-04-29 Creative Kingdoms, Llc Portable gaming device and gaming system combining both physical and virtual play elements
US9737797B2 (en) 2001-02-22 2017-08-22 Mq Gaming, Llc Wireless entertainment device, system, and method
US10758818B2 (en) 2001-02-22 2020-09-01 Mq Gaming, Llc Wireless entertainment device, system, and method
US9393491B2 (en) 2001-02-22 2016-07-19 Mq Gaming, Llc Wireless entertainment device, system, and method
US20090233769A1 (en) * 2001-03-07 2009-09-17 Timothy Pryor Motivation and enhancement of physical and mental exercise, rehabilitation, health and social interaction
US8306635B2 (en) * 2001-03-07 2012-11-06 Motion Games, Llc Motivation and enhancement of physical and mental exercise, rehabilitation, health and social interaction
US8892219B2 (en) 2001-03-07 2014-11-18 Motion Games, Llc Motivation and enhancement of physical and mental exercise, rehabilitation, health and social interaction
US20070187506A1 (en) * 2001-04-19 2007-08-16 Leonard Reiffel Combined imaging coded data source data acquisition
US7377438B2 (en) 2001-04-19 2008-05-27 Leonard Reiffel Combined imaging coded data source data acquisition
US20040125076A1 (en) * 2001-06-08 2004-07-01 David Green Method and apparatus for human interface with a computer
US6865447B2 (en) * 2001-06-14 2005-03-08 Sharper Image Corporation Robot capable of detecting an edge
US7024280B2 (en) 2001-06-14 2006-04-04 Sharper Image Corporation Robot capable of detecting an edge
US20040059467A1 (en) * 2001-06-14 2004-03-25 Sharper Image Corporation Robot capable of detecting an edge
US20030113018A1 (en) * 2001-07-18 2003-06-19 Nefian Ara Victor Dynamic gesture recognition from stereo sequences
US7274800B2 (en) * 2001-07-18 2007-09-25 Intel Corporation Dynamic gesture recognition from stereo sequences
US20040135766A1 (en) * 2001-08-15 2004-07-15 Leonard Reiffel Imaged toggled data input product
GB2379493A (en) * 2001-09-06 2003-03-12 4D Technology Systems Ltd Controlling an electronic device by detecting a handheld member with a camera
US20060182346A1 (en) * 2001-09-17 2006-08-17 National Inst. Of Adv. Industrial Science & Tech. Interface apparatus
US7680295B2 (en) * 2001-09-17 2010-03-16 National Institute Of Advanced Industrial Science And Technology Hand-gesture based interface apparatus
US7899221B2 (en) 2001-11-08 2011-03-01 Institut Straumann Ag Devices and methods for producing denture parts
US20050214716A1 (en) * 2001-11-08 2005-09-29 Willytech Gmbh Devices and methods for producing denture parts
US20050030296A1 (en) * 2002-03-29 2005-02-10 Xerox Corporation Tactile overlays for screens
US7184032B2 (en) * 2002-03-29 2007-02-27 Xerox Corporation Tactile overlays for screens
US10507387B2 (en) 2002-04-05 2019-12-17 Mq Gaming, Llc System and method for playing an interactive game
US8827810B2 (en) 2002-04-05 2014-09-09 Mq Gaming, Llc Methods for providing interactive entertainment
US8702515B2 (en) 2002-04-05 2014-04-22 Mq Gaming, Llc Multi-platform gaming system using RFID-tagged toys
US9272206B2 (en) 2002-04-05 2016-03-01 Mq Gaming, Llc System and method for playing an interactive game
US11278796B2 (en) 2002-04-05 2022-03-22 Mq Gaming, Llc Methods and systems for providing personalized interactive entertainment
US10478719B2 (en) 2002-04-05 2019-11-19 Mq Gaming, Llc Methods and systems for providing personalized interactive entertainment
US9463380B2 (en) 2002-04-05 2016-10-11 Mq Gaming, Llc System and method for playing an interactive game
US10010790B2 (en) 2002-04-05 2018-07-03 Mq Gaming, Llc System and method for playing an interactive game
US9616334B2 (en) 2002-04-05 2017-04-11 Mq Gaming, Llc Multi-platform gaming system using RFID-tagged toys
US20030189639A1 (en) * 2002-04-09 2003-10-09 Benoit Marchand Method and device for correcting the rotation of a video display
US7369144B2 (en) * 2002-04-09 2008-05-06 St Microelectronics Sa Method and device for correcting the rotation of a video display
US7165029B2 (en) 2002-05-09 2007-01-16 Intel Corporation Coupled hidden Markov model for audiovisual speech recognition
US20030212556A1 (en) * 2002-05-09 2003-11-13 Nefian Ara V. Factorial hidden markov model for audiovisual speech recognition
US20030212552A1 (en) * 2002-05-09 2003-11-13 Liang Lu Hong Face recognition procedure useful for audiovisual speech recognition
US7209883B2 (en) 2002-05-09 2007-04-24 Intel Corporation Factorial hidden markov model for audiovisual speech recognition
US7687744B2 (en) 2002-05-13 2010-03-30 S.C. Johnson & Son, Inc. Coordinated emission of fragrance, light, and sound
US20030227470A1 (en) * 2002-06-06 2003-12-11 Yakup Genc System and method for measuring the registration accuracy of an augmented reality system
US7190331B2 (en) * 2002-06-06 2007-03-13 Siemens Corporate Research, Inc. System and method for measuring the registration accuracy of an augmented reality system
EP1376317A2 (en) * 2002-06-19 2004-01-02 Seiko Epson Corporation Image/tactile information input device, image/tactile information input method, and image/tactile information input program
US20050099503A1 (en) * 2002-06-19 2005-05-12 Ikuaki Kitabayashi Image/tactile information input device, image/tactile information input method, and image/tactile information input program
EP1376317A3 (en) * 2002-06-19 2005-11-30 Seiko Epson Corporation Image/tactile information input device, image/tactile information input method, and image/tactile information input program
US7184030B2 (en) 2002-06-27 2007-02-27 Smart Technologies Inc. Synchronization of cameras in camera-based touch system to enhance position determination of fast moving objects
US8035629B2 (en) 2002-07-18 2011-10-11 Sony Computer Entertainment Inc. Hand-held computer interactive device
US20070075966A1 (en) * 2002-07-18 2007-04-05 Sony Computer Entertainment Inc. Hand-held computer interactive device
US9682320B2 (en) 2002-07-22 2017-06-20 Sony Interactive Entertainment Inc. Inertially trackable hand-held controller
US8303405B2 (en) 2002-07-27 2012-11-06 Sony Computer Entertainment America Llc Controller for providing inputs to control execution of a program when inputs are combined
US7854655B2 (en) * 2002-07-27 2010-12-21 Sony Computer Entertainment America Inc. Obtaining input for controlling execution of a game program
US9474968B2 (en) * 2002-07-27 2016-10-25 Sony Interactive Entertainment America Llc Method and system for applying gearing effects to visual tracking
US10220302B2 (en) 2002-07-27 2019-03-05 Sony Interactive Entertainment Inc. Method and apparatus for tracking three-dimensional movements of an object using a depth sensing camera
US10086282B2 (en) 2002-07-27 2018-10-02 Sony Interactive Entertainment Inc. Tracking device for use in obtaining information for controlling game program execution
US8797260B2 (en) * 2002-07-27 2014-08-05 Sony Computer Entertainment Inc. Inertially trackable hand-held controller
US8686939B2 (en) * 2002-07-27 2014-04-01 Sony Computer Entertainment Inc. System, method, and apparatus for three-dimensional input control
US20070021208A1 (en) * 2002-07-27 2007-01-25 Xiadong Mao Obtaining input for controlling execution of a game program
US20060282873A1 (en) * 2002-07-27 2006-12-14 Sony Computer Entertainment Inc. Hand-held controller having detectable elements for tracking purposes
US20060287086A1 (en) * 2002-07-27 2006-12-21 Sony Computer Entertainment America Inc. Scheme for translating movements of a hand-held controller into inputs for a system
US9174119B2 (en) 2002-07-27 2015-11-03 Sony Computer Entertainement America, LLC Controller for providing inputs to control execution of a program when inputs are combined
US20060287087A1 (en) * 2002-07-27 2006-12-21 Sony Computer Entertainment America Inc. Method for mapping movements of a hand-held controller to game commands
US20080220867A1 (en) * 2002-07-27 2008-09-11 Sony Computer Entertainment Inc. Methods and systems for applying gearing effects to actions based on input data
US8313380B2 (en) 2002-07-27 2012-11-20 Sony Computer Entertainment America Llc Scheme for translating movements of a hand-held controller into inputs for a system
US20060264260A1 (en) * 2002-07-27 2006-11-23 Sony Computer Entertainment Inc. Detectable and trackable hand-held controller
US20080274804A1 (en) * 2002-07-27 2008-11-06 Sony Computer Entertainment America Inc. Method and system for adding a new player to a game in response to controller activity
US7737944B2 (en) * 2002-07-27 2010-06-15 Sony Computer Entertainment America Inc. Method and system for adding a new player to a game in response to controller activity
US8570378B2 (en) 2002-07-27 2013-10-29 Sony Computer Entertainment Inc. Method and apparatus for tracking three-dimensional movements of an object using a depth sensing camera
US20060287084A1 (en) * 2002-07-27 2006-12-21 Xiadong Mao System, method, and apparatus for three-dimensional input control
US20090122146A1 (en) * 2002-07-27 2009-05-14 Sony Computer Entertainment Inc. Method and apparatus for tracking three-dimensional movements of an object using a depth sensing camera
US10406433B2 (en) 2002-07-27 2019-09-10 Sony Interactive Entertainment America Llc Method and system for applying gearing effects to visual tracking
US20060287085A1 (en) * 2002-07-27 2006-12-21 Xiadong Mao Inertially trackable hand-held controller
US20070015558A1 (en) * 2002-07-27 2007-01-18 Sony Computer Entertainment America Inc. Method and apparatus for use in determining an activity level of a user in relation to a system
US20060252541A1 (en) * 2002-07-27 2006-11-09 Sony Computer Entertainment Inc. Method and system for applying gearing effects to visual tracking
US8976265B2 (en) 2002-07-27 2015-03-10 Sony Computer Entertainment Inc. Apparatus for image and sound capture in a game environment
US20060274032A1 (en) * 2002-07-27 2006-12-07 Xiadong Mao Tracking device for use in obtaining information for controlling game program execution
US9381424B2 (en) 2002-07-27 2016-07-05 Sony Interactive Entertainment America Llc Scheme for translating movements of a hand-held controller into inputs for a system
US20060256081A1 (en) * 2002-07-27 2006-11-16 Sony Computer Entertainment America Inc. Scheme for detecting and tracking user manipulation of a game controller body
US9393487B2 (en) * 2002-07-27 2016-07-19 Sony Interactive Entertainment Inc. Method for mapping movements of a hand-held controller to game commands
US10099130B2 (en) 2002-07-27 2018-10-16 Sony Interactive Entertainment America Llc Method and system for applying gearing effects to visual tracking
US7782297B2 (en) 2002-07-27 2010-08-24 Sony Computer Entertainment America Inc. Method and apparatus for use in determining an activity level of a user in relation to a system
US20080009348A1 (en) * 2002-07-31 2008-01-10 Sony Computer Entertainment Inc. Combiner method for altering game gearing
US9682319B2 (en) 2002-07-31 2017-06-20 Sony Interactive Entertainment Inc. Combiner method for altering game gearing
WO2004019271A3 (en) * 2002-08-22 2004-05-21 Schick Technologies Inc Intra-oral camera coupled directly and independently to a computer
WO2004019271A2 (en) * 2002-08-22 2004-03-04 Schick Technologies, Inc. Intra-oral camera coupled directly and independently to a computer
US7002551B2 (en) * 2002-09-25 2006-02-21 Hrl Laboratories, Llc Optical see-through augmented reality modified-scale display
US20040051680A1 (en) * 2002-09-25 2004-03-18 Azuma Ronald T. Optical see-through augmented reality modified-scale display
US7171043B2 (en) 2002-10-11 2007-01-30 Intel Corporation Image recognition using hidden markov models and coupled hidden markov models
US7289645B2 (en) 2002-10-25 2007-10-30 Mitsubishi Fuso Truck And Bus Corporation Hand pattern switch device
US20040141634A1 (en) * 2002-10-25 2004-07-22 Keiichi Yamamoto Hand pattern switch device
US10058774B2 (en) 2002-10-30 2018-08-28 Nike, Inc. Sigils for use with apparel
US10238959B2 (en) 2002-10-30 2019-03-26 Nike, Inc. Interactive gaming apparel for interactive gaming
US20100137064A1 (en) * 2002-10-30 2010-06-03 Nike, Inc. Sigils for Use with Apparel
US10864435B2 (en) 2002-10-30 2020-12-15 Nike, Inc. Sigils for use with apparel
EP2163286A3 (en) * 2002-10-30 2011-06-29 Nike International Ltd. Clothes with tracking marks for computer games
US9597598B2 (en) 2002-10-30 2017-03-21 Nike, Inc. Sigils for use with apparel
US9162142B2 (en) 2002-10-30 2015-10-20 Nike, Inc. Sigils for use with apparel
US9517406B2 (en) 2002-10-30 2016-12-13 Nike, Inc. Interactive gaming apparel for interactive gaming
US8206219B2 (en) 2002-10-30 2012-06-26 Nike, Inc. Interactive gaming apparel for interactive gaming
US20040087378A1 (en) * 2002-11-01 2004-05-06 Poe Lang Enterprise Co., Ltd. Shooting exercise for simultaneous multiple shooters
US20080065291A1 (en) * 2002-11-04 2008-03-13 Automotive Technologies International, Inc. Gesture-Based Control of Vehicular Components
US20060022962A1 (en) * 2002-11-15 2006-02-02 Gerald Morrison Size/scale and orientation determination of a pointer in a camera-based touch system
US8228304B2 (en) 2002-11-15 2012-07-24 Smart Technologies Ulc Size/scale orientation determination of a pointer in a camera-based touch system
SG134976A1 (en) * 2002-11-26 2007-09-28 Sony Corp Data input to a computer system defining a three-dimensional model
US20040122675A1 (en) * 2002-12-19 2004-06-24 Nefian Ara Victor Visual feature extraction procedure useful for audiovisual continuous speech recognition
US7472063B2 (en) 2002-12-19 2008-12-30 Intel Corporation Audio-visual feature fusion and support vector machine useful for continuous speech recognition
US20040131259A1 (en) * 2003-01-06 2004-07-08 Nefian Ara V. Embedded bayesian network for pattern recognition
US7203368B2 (en) 2003-01-06 2007-04-10 Intel Corporation Embedded bayesian network for pattern recognition
US7932482B2 (en) 2003-02-07 2011-04-26 S.C. Johnson & Son, Inc. Diffuser with light emitting diode nightlight
US20040155962A1 (en) * 2003-02-11 2004-08-12 Marks Richard L. Method and apparatus for real time motion capture
US9177387B2 (en) 2003-02-11 2015-11-03 Sony Computer Entertainment Inc. Method and apparatus for real time motion capture
US10410359B2 (en) * 2003-02-11 2019-09-10 Sony Interactive Entertainment Inc. Methods for capturing images of markers of a person to control interfacing with an application
US8508508B2 (en) 2003-02-14 2013-08-13 Next Holdings Limited Touch screen signal processing with single-point calibration
US8466885B2 (en) 2003-02-14 2013-06-18 Next Holdings Limited Touch screen signal processing
US20100090985A1 (en) * 2003-02-14 2010-04-15 Next Holdings Limited Touch screen signal processing
US8289299B2 (en) 2003-02-14 2012-10-16 Next Holdings Limited Touch screen signal processing
US8456447B2 (en) 2003-02-14 2013-06-04 Next Holdings Limited Touch screen signal processing
US8456451B2 (en) 2003-03-11 2013-06-04 Smart Technologies Ulc System and method for differentiating between pointers used to contact touch surface
US7532206B2 (en) 2003-03-11 2009-05-12 Smart Technologies Ulc System and method for differentiating between pointers used to contact touch surface
US20040179001A1 (en) * 2003-03-11 2004-09-16 Morrison Gerald D. System and method for differentiating between pointers used to contact touch surface
EP1601783A4 (en) * 2003-03-13 2008-11-05 Sony Pictures Entertainment Wheel motion control input device for animation system
EP1601783A2 (en) * 2003-03-13 2005-12-07 Sony Pictures Entertainment Inc. Wheel motion control input device for animation system
US7805220B2 (en) 2003-03-14 2010-09-28 Sharper Image Acquisition Llc Robot vacuum with internal mapping system
US9993724B2 (en) 2003-03-25 2018-06-12 Mq Gaming, Llc Interactive gaming toy
US10369463B2 (en) 2003-03-25 2019-08-06 Mq Gaming, Llc Wireless interactive game having both physical and virtual elements
US9770652B2 (en) 2003-03-25 2017-09-26 Mq Gaming, Llc Wireless interactive game having both physical and virtual elements
US10022624B2 (en) 2003-03-25 2018-07-17 Mq Gaming, Llc Wireless interactive game having both physical and virtual elements
US9707478B2 (en) 2003-03-25 2017-07-18 Mq Gaming, Llc Motion-sensitive controller and associated gaming applications
US9446319B2 (en) 2003-03-25 2016-09-20 Mq Gaming, Llc Interactive gaming toy
US9393500B2 (en) 2003-03-25 2016-07-19 Mq Gaming, Llc Wireless interactive game having both physical and virtual elements
US9039533B2 (en) 2003-03-25 2015-05-26 Creative Kingdoms, Llc Wireless interactive game having both physical and virtual elements
US11052309B2 (en) 2003-03-25 2021-07-06 Mq Gaming, Llc Wireless interactive game having both physical and virtual elements
US10583357B2 (en) 2003-03-25 2020-03-10 Mq Gaming, Llc Interactive gaming toy
US10551930B2 (en) * 2003-03-25 2020-02-04 Microsoft Technology Licensing, Llc System and method for executing a process using accelerometer signals
US8961312B2 (en) 2003-03-25 2015-02-24 Creative Kingdoms, Llc Motion-sensitive controller and associated gaming applications
JP2006522967A (en) * 2003-04-08 2006-10-05 Smart Technologies Inc. Automatic alignment touch system and method
EP2287709A1 (en) * 2003-04-08 2011-02-23 SMART Technologies ULC Auto-aligning touch system and method
CN100465865C (en) * 2003-04-08 2009-03-04 智能技术公司 Auto-aligning touch system and method
JP4820285B2 (en) * 2003-04-08 2011-11-24 Smart Technologies ULC Automatic alignment touch system and method
US7256772B2 (en) 2003-04-08 2007-08-14 Smart Technologies, Inc. Auto-aligning touch system and method
WO2004090706A3 (en) * 2003-04-08 2005-01-13 Smart Technologies Inc Auto-aligning touch system and method
WO2004090706A2 (en) * 2003-04-08 2004-10-21 Smart Technologies Inc. Auto-aligning touch system and method
US20040201575A1 (en) * 2003-04-08 2004-10-14 Morrison Gerald D. Auto-aligning touch system and method
US20070098250A1 (en) * 2003-05-01 2007-05-03 Delta Dansk Elektronik, Lys Og Akustik Man-machine interface based on 3-D positions of the human body
US20050125826A1 (en) * 2003-05-08 2005-06-09 Hunleth Frank A. Control framework with a zoomable graphical user interface for organizing selecting and launching media items
US7834849B2 (en) * 2003-05-08 2010-11-16 Hillcrest Laboratories, Inc. Control framework with a zoomable graphical user interface for organizing selecting and launching media items
US20040233223A1 (en) * 2003-05-22 2004-11-25 Steven Schkolne Physical/digital input methodologies for spatial manipulations and entertainment
US20060291797A1 (en) * 2003-05-27 2006-12-28 Leonard Reiffel Multi-imager multi-source multi-use coded data source data input product
US8072470B2 (en) 2003-05-29 2011-12-06 Sony Computer Entertainment Inc. System and method for providing a real-time three-dimensional interactive environment
US20040239670A1 (en) * 2003-05-29 2004-12-02 Sony Computer Entertainment Inc. System and method for providing a real-time three-dimensional interactive environment
US11010971B2 (en) 2003-05-29 2021-05-18 Sony Interactive Entertainment Inc. User-driven three-dimensional interactive gaming environment
US9427658B2 (en) 2003-07-02 2016-08-30 Ganz Interactive action figures for gaming systems
US8734242B2 (en) 2003-07-02 2014-05-27 Ganz Interactive action figures for gaming systems
US9132344B2 (en) 2003-07-02 2015-09-15 Ganz Interactive action figures for gaming system
US8636588B2 (en) 2003-07-02 2014-01-28 Ganz Interactive action figures for gaming systems
US10112114B2 (en) 2003-07-02 2018-10-30 Ganz Interactive action figures for gaming systems
US7862428B2 (en) 2003-07-02 2011-01-04 Ganz Interactive action figures for gaming systems
US8585497B2 (en) 2003-07-02 2013-11-19 Ganz Interactive action figures for gaming systems
US20100151940A1 (en) * 2003-07-02 2010-06-17 Ganz Interactive action figures for gaming systems
US20090054155A1 (en) * 2003-07-02 2009-02-26 Ganz Interactive action figures for gaming systems
US9199153B2 (en) 2003-07-30 2015-12-01 Interactive Sports Technologies Inc. Golf simulation system with reflective projectile marking
US7544137B2 (en) 2003-07-30 2009-06-09 Richardson Todd E Sports simulation system
US9381398B2 (en) 2003-07-30 2016-07-05 Interactive Sports Technologies Inc. Sports simulation system
US20050023763A1 (en) * 2003-07-30 2005-02-03 Richardson Todd E. Sports simulation system
US9649545B2 (en) 2003-07-30 2017-05-16 Interactive Sports Technologies Inc. Golf simulation system with reflective projectile marking
US20060063574A1 (en) * 2003-07-30 2006-03-23 Richardson Todd E Sports simulation system
US20050063564A1 (en) * 2003-08-11 2005-03-24 Keiichi Yamamoto Hand pattern switch device
US8947347B2 (en) 2003-08-27 2015-02-03 Sony Computer Entertainment Inc. Controlling actions in a video game unit
US20060239471A1 (en) * 2003-08-27 2006-10-26 Sony Computer Entertainment Inc. Methods and apparatus for targeted sound detection and characterization
US7643006B2 (en) * 2003-09-16 2010-01-05 Smart Technologies Ulc Gesture recognition method and touch system incorporating the same
US20080297471A1 (en) * 2003-09-16 2008-12-04 Smart Technologies Ulc Gesture recognition method and touch system incorporating the same
US7411575B2 (en) 2003-09-16 2008-08-12 Smart Technologies Ulc Gesture recognition method and touch system incorporating the same
US8456418B2 (en) 2003-10-09 2013-06-04 Smart Technologies Ulc Apparatus for determining the location of a pointer within a region of interest
US20080146303A1 (en) * 2003-10-23 2008-06-19 Hiromu Ueshima Game for moving an object on a screen in response to movement of an operation article
US7235280B2 (en) 2003-11-12 2007-06-26 Srs Technologies, Inc. Non-intrusive photogrammetric targets
EP1698964A4 (en) * 2003-11-25 2010-03-31 Kenji Nishi Information input unit, storing unit, information input device, and information processing device
EP1698964A1 (en) * 2003-11-25 2006-09-06 Kenji Nishi Information input unit, storing unit, information input device, and information processing device
US20110167267A1 (en) * 2003-12-31 2011-07-07 Ganz System and method for toy adoption and marketing
US8900030B2 (en) 2003-12-31 2014-12-02 Ganz System and method for toy adoption and marketing
US20110092128A1 (en) * 2003-12-31 2011-04-21 Ganz System and method for toy adoption and marketing
US9238171B2 (en) 2003-12-31 2016-01-19 Howard Ganz System and method for toy adoption and marketing
US8465338B2 (en) 2003-12-31 2013-06-18 Ganz System and method for toy adoption and marketing
US8641471B2 (en) 2003-12-31 2014-02-04 Ganz System and method for toy adoption and marketing
US20080109313A1 (en) * 2003-12-31 2008-05-08 Ganz System and method for toy adoption and marketing
US7789726B2 (en) 2003-12-31 2010-09-07 Ganz System and method for toy adoption and marketing
US7967657B2 (en) 2003-12-31 2011-06-28 Ganz System and method for toy adoption and marketing
US20090063282A1 (en) * 2003-12-31 2009-03-05 Ganz System and method for toy adoption and marketing
US20090131164A1 (en) * 2003-12-31 2009-05-21 Ganz System and method for toy adoption and marketing
US20110161093A1 (en) * 2003-12-31 2011-06-30 Ganz System and method for toy adoption and marketing
US9610513B2 (en) 2003-12-31 2017-04-04 Ganz System and method for toy adoption and marketing
US20110167485A1 (en) * 2003-12-31 2011-07-07 Ganz System and method for toy adoption and marketing
US20110167481A1 (en) * 2003-12-31 2011-07-07 Ganz System and method for toy adoption and marketing
US20110184797A1 (en) * 2003-12-31 2011-07-28 Ganz System and method for toy adoption and marketing
US20110190047A1 (en) * 2003-12-31 2011-08-04 Ganz System and method for toy adoption and marketing
US11443339B2 (en) 2003-12-31 2022-09-13 Ganz System and method for toy adoption and marketing
US8292688B2 (en) 2003-12-31 2012-10-23 Ganz System and method for toy adoption and marketing
US9947023B2 (en) 2003-12-31 2018-04-17 Ganz System and method for toy adoption and marketing
US8002605B2 (en) 2003-12-31 2011-08-23 Ganz System and method for toy adoption and marketing
US8500511B2 (en) 2003-12-31 2013-08-06 Ganz System and method for toy adoption and marketing
US8317566B2 (en) 2003-12-31 2012-11-27 Ganz System and method for toy adoption and marketing
US9721269B2 (en) 2003-12-31 2017-08-01 Ganz System and method for toy adoption and marketing
US20080009350A1 (en) * 2003-12-31 2008-01-10 Ganz System and method for toy adoption marketing
US10657551B2 (en) 2003-12-31 2020-05-19 Ganz System and method for toy adoption and marketing
US8408963B2 (en) 2003-12-31 2013-04-02 Ganz System and method for toy adoption and marketing
US7568964B2 (en) 2003-12-31 2009-08-04 Ganz System and method for toy adoption and marketing
US8814624B2 (en) 2003-12-31 2014-08-26 Ganz System and method for toy adoption and marketing
US8808053B2 (en) 2003-12-31 2014-08-19 Ganz System and method for toy adoption and marketing
US8549440B2 (en) 2003-12-31 2013-10-01 Ganz System and method for toy adoption and marketing
US8777687B2 (en) 2003-12-31 2014-07-15 Ganz System and method for toy adoption and marketing
US8460052B2 (en) 2003-12-31 2013-06-11 Ganz System and method for toy adoption and marketing
US20080009351A1 (en) * 2003-12-31 2008-01-10 Ganz System and method for toy adoption marketing
US7355593B2 (en) 2004-01-02 2008-04-08 Smart Technologies, Inc. Pointer tracking across multiple overlapping coordinate input sub-regions defining a generally contiguous input region
US8089462B2 (en) 2004-01-02 2012-01-03 Smart Technologies Ulc Pointer tracking across multiple overlapping coordinate input sub-regions defining a generally contiguous input region
US20080284733A1 (en) * 2004-01-02 2008-11-20 Smart Technologies Inc. Pointer tracking across multiple overlapping coordinate input sub-regions defining a generally contiguous input region
EP1704465B1 (en) * 2004-01-16 2016-04-27 Sony Computer Entertainment Inc. Method and apparatus for light input device
WO2005073838A2 (en) 2004-01-16 2005-08-11 Sony Computer Entertainment Inc. Method and apparatus for light input device
US20050166163A1 (en) * 2004-01-23 2005-07-28 Chang Nelson L.A. Systems and methods of interfacing with a machine
US7755608B2 (en) 2004-01-23 2010-07-13 Hewlett-Packard Development Company, L.P. Systems and methods of interfacing with a machine
US20140325459A1 (en) * 2004-02-06 2014-10-30 Nokia Corporation Gesture control system
US20080068352A1 (en) * 2004-02-17 2008-03-20 Smart Technologies Inc. Apparatus for detecting a pointer within a region of interest
US7232986B2 (en) 2004-02-17 2007-06-19 Smart Technologies Inc. Apparatus for detecting a pointer within a region of interest
US20050178953A1 (en) * 2004-02-17 2005-08-18 Stephen Worthington Apparatus for detecting a pointer within a region of interest
US7499569B2 (en) 2004-02-26 2009-03-03 Mitsubishi Fuso Truck And Bus Corporation Hand pattern switching apparatus
US20050238202A1 (en) * 2004-02-26 2005-10-27 Mitsubishi Fuso Truck And Bus Corporation Hand pattern switching apparatus
US20050192185A1 (en) * 2004-02-27 2005-09-01 Saathoff Lee D. Power transmission fluids
EP1575169A2 (en) * 2004-03-10 2005-09-14 ABB PATENT GmbH Proximity switch with signal processing system
EP1575169A3 (en) * 2004-03-10 2005-11-09 ABB PATENT GmbH Proximity switch with signal processing system
US20060020369A1 (en) * 2004-03-11 2006-01-26 Taylor Charles E Robot vacuum cleaner
US7728852B2 (en) * 2004-03-31 2010-06-01 Canon Kabushiki Kaisha Image processing method and image processing apparatus
US20050231532A1 (en) * 2004-03-31 2005-10-20 Canon Kabushiki Kaisha Image processing method and image processing apparatus
US7460110B2 (en) 2004-04-29 2008-12-02 Smart Technologies Ulc Dual mode touch system
US8274496B2 (en) 2004-04-29 2012-09-25 Smart Technologies Ulc Dual mode touch systems
US20090146973A1 (en) * 2004-04-29 2009-06-11 Smart Technologies Ulc Dual mode touch systems
US20090146972A1 (en) * 2004-05-05 2009-06-11 Smart Technologies Ulc Apparatus and method for detecting a pointer relative to a touch surface
US7492357B2 (en) 2004-05-05 2009-02-17 Smart Technologies Ulc Apparatus and method for detecting a pointer relative to a touch surface
US8149221B2 (en) 2004-05-07 2012-04-03 Next Holdings Limited Touch panel display system with illumination and detection provided from a single edge
US8120596B2 (en) 2004-05-21 2012-02-21 Smart Technologies Ulc Tiled touch system
DE102004025516A1 (en) * 2004-05-21 2005-12-29 X3D Technologies Gmbh Object, e.g. user's fingertip, position determining arrangement for use in a room, has opto-electronic camera with visible mirror and evaluation unit determining position of object based on signal generated by camera
US9230395B2 (en) 2004-06-18 2016-01-05 Igt Control of wager-based game using gesture recognition
US20070259716A1 (en) * 2004-06-18 2007-11-08 Igt Control of wager-based game using gesture recognition
US8684839B2 (en) 2004-06-18 2014-04-01 Igt Control of wager-based game using gesture recognition
US9798391B2 (en) 2004-06-18 2017-10-24 Igt Control of wager-based game using gesture recognition
US20080133640A1 (en) * 2004-07-27 2008-06-05 Sony Corporation Information Processing Device and Method, Recording Medium, and Program
US8099460B2 (en) * 2004-07-27 2012-01-17 Sony Corporation Information processing device and method, recording medium, and program
US8856231B2 (en) * 2004-07-27 2014-10-07 Sony Corporation Information processing device and method, recording medium, and program
US20120110081A1 (en) * 2004-07-27 2012-05-03 Sony Corporation Information processing device and method, recording medium, and program
US20060125799A1 (en) * 2004-08-06 2006-06-15 Hillis W D Touch driven method and apparatus to integrate and display multiple image layers forming alternate depictions of same subject matter
US8188985B2 (en) 2004-08-06 2012-05-29 Touchtable, Inc. Method and apparatus continuing action of user gestures performed upon a touch sensitive interactive display in simulation of inertia
US8665239B2 (en) 2004-08-06 2014-03-04 Qualcomm Incorporated Method and apparatus continuing action of user gestures performed upon a touch sensitive interactive display in simulation of inertia
US8269739B2 (en) 2004-08-06 2012-09-18 Touchtable, Inc. Touch driven method and apparatus to integrate and display multiple image layers forming alternate depictions of same subject matter
US10073610B2 (en) 2004-08-06 2018-09-11 Qualcomm Incorporated Bounding box gesture recognition on a touch detecting interactive display
US8624863B2 (en) 2004-08-06 2014-01-07 Qualcomm Incorporated Touch driven method and apparatus to integrate and display multiple image layers forming alternate depictions of same subject matter
US8669958B2 (en) 2004-08-06 2014-03-11 Qualcomm Incorporated Method and apparatus continuing action of user gestures performed upon a touch sensitive interactive display in simulation of inertia
US7907124B2 (en) * 2004-08-06 2011-03-15 Touchtable, Inc. Method and apparatus continuing action of user gestures performed upon a touch sensitive interactive display in simulation of inertia
US20060288313A1 (en) * 2004-08-06 2006-12-21 Hillis W D Bounding box gesture recognition on a touch detecting interactive display
US8139043B2 (en) 2004-08-06 2012-03-20 Touchtable, Inc. Bounding box gesture recognition on a touch detecting interactive display
US20100318904A1 (en) * 2004-08-06 2010-12-16 Touchtable, Inc. Method and apparatus continuing action of user gestures performed upon a touch sensitive interactive display in simulation of inertia
US20060031786A1 (en) * 2004-08-06 2006-02-09 Hillis W D Method and apparatus continuing action of user gestures performed upon a touch sensitive interactive display in simulation of inertia
US20100117979A1 (en) * 2004-08-06 2010-05-13 Touchtable, Inc. Bounding box gesture recognition on a touch detecting interactive display
US8692792B2 (en) 2004-08-06 2014-04-08 Qualcomm Incorporated Bounding box gesture recognition on a touch detecting interactive display
US7728821B2 (en) 2004-08-06 2010-06-01 Touchtable, Inc. Touch detecting interactive display
US7719523B2 (en) 2004-08-06 2010-05-18 Touchtable, Inc. Bounding box gesture recognition on a touch detecting interactive display
US7724242B2 (en) 2004-08-06 2010-05-25 Touchtable, Inc. Touch driven method and apparatus to integrate and display multiple image layers forming alternate depictions of same subject matter
US20070046643A1 (en) * 2004-08-06 2007-03-01 Hillis W Daniel State-Based Approach to Gesture Identification
US20060274046A1 (en) * 2004-08-06 2006-12-07 Hillis W D Touch detecting interactive display
US8072439B2 (en) 2004-08-06 2011-12-06 Touchtable, Inc. Touch detecting interactive display
US20100039446A1 (en) * 2004-08-06 2010-02-18 Applied Minds, Inc. Touch driven method and apparatus to integrate and display multiple image layers forming alternate depictions of same subject matter
US7542040B2 (en) * 2004-08-11 2009-06-02 The United States Of America As Represented By The Secretary Of The Navy Simulated locomotion method and apparatus
US20070003915A1 (en) * 2004-08-11 2007-01-04 Templeman James N Simulated locomotion method and apparatus
US20060038833A1 (en) * 2004-08-19 2006-02-23 Mallinson Dominic S Portable augmented reality device and method
US10099147B2 (en) 2004-08-19 2018-10-16 Sony Interactive Entertainment Inc. Using a portable device to interface with a video game rendered on a main display
US9116543B2 (en) 2004-08-19 2015-08-25 Iii Holdings 1, Llc Virtual input system
US8547401B2 (en) 2004-08-19 2013-10-01 Sony Computer Entertainment Inc. Portable augmented reality device and method
US10564776B2 (en) 2004-08-19 2020-02-18 American Patents Llc Virtual input system
US8668584B2 (en) 2004-08-19 2014-03-11 Igt Virtual input system
US9606674B2 (en) 2004-08-19 2017-03-28 Iii Holdings 1, Llc Virtual input system
US20130252214A1 (en) * 2004-09-27 2013-09-26 123 Certification, Inc. Body motion training and qualification system and method
US9675878B2 (en) 2004-09-29 2017-06-13 Mq Gaming, Llc System and method for playing a virtual game by sensing physical movements
US20100070473A1 (en) * 2004-12-15 2010-03-18 Swett Ian Distributed Data Store with a Designated Master to Ensure Consistency
US8275804B2 (en) 2004-12-15 2012-09-25 Applied Minds, Llc Distributed data store with a designated master to ensure consistency
US20070011159A1 (en) * 2004-12-15 2007-01-11 Hillis W Daniel Distributed data store with an orderstamp to ensure progress
US10552496B2 (en) 2004-12-15 2020-02-04 Applied Invention, Llc Data store with lock-free stateless paging capacity
US11727072B2 (en) 2004-12-15 2023-08-15 Applied Invention, Llc Data store with lock-free stateless paging capacity
US8131781B2 (en) * 2004-12-15 2012-03-06 Applied Minds, Llc Anti-item for deletion of content in a distributed datastore
US11321408B2 (en) 2004-12-15 2022-05-03 Applied Invention, Llc Data store with lock-free stateless paging capacity
US20060129540A1 (en) * 2004-12-15 2006-06-15 Hillis W D Data store with lock-free stateless paging capability
US7590635B2 (en) 2004-12-15 2009-09-15 Applied Minds, Inc. Distributed data store with an orderstamp to ensure progress
US8719313B2 (en) 2004-12-15 2014-05-06 Applied Minds, Llc Distributed data store with a designated master to ensure consistency
US8996486B2 (en) 2004-12-15 2015-03-31 Applied Invention, Llc Data store with lock-free stateless paging capability
KR100656315B1 (en) 2005-01-07 2006-12-13 Korea Advanced Institute Of Science And Technology Apparatus for console game
US8681072B2 (en) 2005-02-18 2014-03-25 The Procter And Gamble Company Mobile display
US20110191269A1 (en) * 2005-02-18 2011-08-04 Judd Warren Weis Mobile Display
US20060187141A1 (en) * 2005-02-18 2006-08-24 Weis Judd W Mobile display
US7948447B2 (en) 2005-02-18 2011-05-24 The Procter & Gamble Company Mobile display
US20070132785A1 (en) * 2005-03-29 2007-06-14 Ebersole John F Jr Platform for immersive gaming
US20080113800A1 (en) * 2005-04-20 2008-05-15 Robotic Amusements, Inc. Game With Remotely Controlled Game Vehicles
US20090078858A1 (en) * 2005-04-21 2009-03-26 Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. Orientation determination utilizing a cordless device
US20100001950A1 (en) * 2005-04-21 2010-01-07 Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. Position determination utilizing a cordless device
US8384663B2 (en) 2005-04-21 2013-02-26 Avago Technologies General Ip (Singapore) Pte. Ltd. Position determination utilizing a cordless device
US7473884B2 (en) 2005-04-21 2009-01-06 Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. Orientation determination utilizing a cordless device
US20060237633A1 (en) * 2005-04-21 2006-10-26 Fouquet Julie E Orientation determination utilizing a cordless device
GB2425352B (en) * 2005-04-21 2010-07-14 Agilent Technologies Inc System and method of determining orientation information
US7737393B2 (en) 2005-04-21 2010-06-15 Avago Technologies Ecbu Ip (Singapore) Pte. Ltd. Orientation determination utilizing a cordless device
US20080113799A1 (en) * 2005-05-10 2008-05-15 Pixart Imaging Inc. Orientation device and method for coordinate generation employed thereby
US7611412B2 (en) * 2005-05-10 2009-11-03 Pixart Imaging Inc. Orientation device and method for coordinate generation employed thereby
US20060284832A1 (en) * 2005-06-16 2006-12-21 H.P.B. Optoelectronics Co., Ltd. Method and apparatus for locating a laser spot
EP1894086A1 (en) * 2005-06-16 2008-03-05 SSD Company Limited Input device, simulated experience method and entertainment system
US20090231269A1 (en) * 2005-06-16 2009-09-17 Hiromu Ueshima Input device, simulated experience method and entertainment system
EP1894086A4 (en) * 2005-06-16 2010-06-30 Ssd Co Ltd Input device, simulated experience method and entertainment system
US20100046796A1 (en) * 2005-06-30 2010-02-25 Koninklijke Philips Electronics, N.V. method of recognizing a motion pattern of an object
AU2006203250B2 (en) * 2005-08-02 2012-06-14 Interactive Sports Technologies Inc Sports simulation system
US7860614B1 (en) * 2005-09-13 2010-12-28 The United States Of America As Represented By The Secretary Of The Army Trainer for robotic vehicle
US20150294480A1 (en) * 2005-10-26 2015-10-15 Sony Computer Entertainment Inc. Control Device for Communicating Visual Information
US9799121B2 (en) * 2005-10-26 2017-10-24 Sony Interactive Entertainment Inc. Control device for communicating visual information
US20150231490A1 (en) * 2005-11-14 2015-08-20 Microsoft Technology Licensing, Llc Stereo video for gaming
US9855496B2 (en) * 2005-11-14 2018-01-02 Microsoft Technology Licensing, Llc Stereo video for gaming
US20090282749A1 (en) * 2005-11-21 2009-11-19 Warminsky Michael F Re-configurable armored tactical personnel and collective training facility
US20070117503A1 (en) * 2005-11-21 2007-05-24 Warminsky Michael F Airflow ceiling ventilation system for an armored tactical personnel and collective training facility
US20070113487A1 (en) * 2005-11-21 2007-05-24 Amec Earth & Environmental, Inc. Re-configurable armored tactical personnel and collective training facility
US8186109B2 (en) 2005-11-21 2012-05-29 Uxb International, Inc. Re-configurable armored tactical personnel and collective training facility
US20070165007A1 (en) * 2006-01-13 2007-07-19 Gerald Morrison Interactive input system
US20070205994A1 (en) * 2006-03-02 2007-09-06 Taco Van Ieperen Touch system and method for interacting with the same
US20070238539A1 (en) * 2006-03-30 2007-10-11 Wayne Dawe Sports simulation system
US20090096714A1 (en) * 2006-03-31 2009-04-16 Brother Kogyo Kabushiki Kaisha Image display device
US7796119B2 (en) 2006-04-03 2010-09-14 Avago Technologies General Ip (Singapore) Pte. Ltd. Position determination with reference
US20070242034A1 (en) * 2006-04-03 2007-10-18 Haven Richard E Position determination with reference
US10491748B1 (en) 2006-04-03 2019-11-26 Wai Wu Intelligent communication routing system and method
EP2460569A3 (en) * 2006-05-04 2012-08-29 Sony Computer Entertainment America LLC Scheme for Detecting and Tracking User Manipulation of a Game Controller Body and for Translating Movements Thereof into Inputs and Game Commands
EP2351604A3 (en) * 2006-05-04 2012-01-25 Sony Computer Entertainment America LLC Obtaining input for controlling execution of a game program
US20100009308A1 (en) * 2006-05-05 2010-01-14 Align Technology, Inc. Visualizing and Manipulating Digital Models for Dental Treatment
US10022621B2 (en) * 2006-05-08 2018-07-17 Nintendo Co., Ltd. Methods and apparatus for using illumination marks for spatial pointing
US20070265075A1 (en) * 2006-05-10 2007-11-15 Sony Computer Entertainment America Inc. Attachable structure for use with hand-held controller having tracking ability
EP2645303A2 (en) * 2006-07-13 2013-10-02 Northrop Grumman Systems Corporation Gesture recognition interface system
EP2645303A3 (en) * 2006-07-13 2013-12-04 Northrop Grumman Systems Corporation Gesture recognition interface system
US20080100825A1 (en) * 2006-09-28 2008-05-01 Sony Computer Entertainment America Inc. Mapping movements of a hand-held controller to the two-dimensional image plane of a display screen
US8310656B2 (en) 2006-09-28 2012-11-13 Sony Computer Entertainment America Llc Mapping movements of a hand-held controller to the two-dimensional image plane of a display screen
US20080080789A1 (en) * 2006-09-28 2008-04-03 Sony Computer Entertainment Inc. Object detection using video input combined with tilt angle information
USRE48417E1 (en) 2006-09-28 2021-02-02 Sony Interactive Entertainment Inc. Object direction using video input combined with tilt angle information
US8781151B2 (en) 2006-09-28 2014-07-15 Sony Computer Entertainment Inc. Object detection using video input combined with tilt angle information
US20080098448A1 (en) * 2006-10-19 2008-04-24 Sony Computer Entertainment America Inc. Controller configured to track user's level of anxiety and other mental and physical attributes
US20080096657A1 (en) * 2006-10-20 2008-04-24 Sony Computer Entertainment America Inc. Method for aiming and shooting using motion sensing controller
US20080096654A1 (en) * 2006-10-20 2008-04-24 Sony Computer Entertainment America Inc. Game control using three-dimensional motions of controller
US9442607B2 (en) 2006-12-04 2016-09-13 Smart Technologies Inc. Interactive input system and method
US20080163055A1 (en) * 2006-12-06 2008-07-03 S.H. Ganz Holdings Inc. And 816877 Ontario Limited System and method for product marketing using feature codes
US8205158B2 (en) 2006-12-06 2012-06-19 Ganz Feature codes and bonuses in virtual worlds
US8549416B2 (en) 2006-12-06 2013-10-01 Ganz Feature codes and bonuses in virtual worlds
US20080136775A1 (en) * 2006-12-08 2008-06-12 Conant Carson V Virtual input device for computing
US20090280898A1 (en) * 2006-12-22 2009-11-12 Konami Digital Entertainment Co., Ltd. Game device, method of controlling game device, and information recording medium
US20080154743A1 (en) * 2006-12-22 2008-06-26 Stephan Holzner Method concerning the transport of dental prostheses
US8353748B2 (en) * 2006-12-22 2013-01-15 Konami Digital Entertainment Co., Ltd. Game device, method of controlling game device, and information recording medium
US20080153069A1 (en) * 2006-12-22 2008-06-26 Stephan Holzner Method, machine-readable medium and computer concerning the manufacture of dental prostheses
US20100192109A1 (en) * 2007-01-06 2010-07-29 Wayne Carl Westerman Detecting and Interpreting Real-World and Security Gestures on Touch and Hover Sensitive Devices
US9158454B2 (en) * 2007-01-06 2015-10-13 Apple Inc. Detecting and interpreting real-world and security gestures on touch and hover sensitive devices
US20100211920A1 (en) * 2007-01-06 2010-08-19 Wayne Carl Westerman Detecting and Interpreting Real-World and Security Gestures on Touch and Hover Sensitive Devices
US9367235B2 (en) 2007-01-06 2016-06-14 Apple Inc. Detecting and interpreting real-world and security gestures on touch and hover sensitive devices
US20080215974A1 (en) * 2007-03-01 2008-09-04 Phil Harrison Interactive user controlled avatar animations
US9171397B2 (en) * 2007-03-06 2015-10-27 Wildtangent, Inc. Rendering of two-dimensional markup messages
US20120139912A1 (en) * 2007-03-06 2012-06-07 Wildtangent, Inc. Rendering of two-dimensional markup messages
US8115753B2 (en) 2007-04-11 2012-02-14 Next Holdings Limited Touch screen system with hover and click input methods
US20080259053A1 (en) * 2007-04-11 2008-10-23 John Newton Touch Screen System with Hover and Click Input Methods
US20090021476A1 (en) * 2007-07-20 2009-01-22 Wolfgang Steinle Integrated medical display system
US20090021475A1 (en) * 2007-07-20 2009-01-22 Wolfgang Steinle Method for displaying and/or processing image data of medical origin using gesture recognition
US8094137B2 (en) 2007-07-23 2012-01-10 Smart Technologies Ulc System and method of detecting contact on a display
GB2451461A (en) * 2007-07-28 2009-02-04 Naveen Chawla Camera based 3D user and wand tracking human-computer interaction system
US20120021386A1 (en) * 2007-08-01 2012-01-26 Airmax Group Plc Method and apparatus for providing information about a vehicle
US8432377B2 (en) 2007-08-30 2013-04-30 Next Holdings Limited Optical touchscreen with improved illumination
US8384693B2 (en) 2007-08-30 2013-02-26 Next Holdings Limited Low profile touch panel systems
US20090058833A1 (en) * 2007-08-30 2009-03-05 John Newton Optical Touchscreen with Improved Illumination
US8368721B2 (en) * 2007-10-06 2013-02-05 Mccoy Anthony Apparatus and method for on-field virtual reality simulation of US football and other sports
US20090091583A1 (en) * 2007-10-06 2009-04-09 Mccoy Anthony Apparatus and method for on-field virtual reality simulation of US football and other sports
US9171454B2 (en) 2007-11-14 2015-10-27 Microsoft Technology Licensing, Llc Magic wand
US20090215534A1 (en) * 2007-11-14 2009-08-27 Microsoft Corporation Magic wand
US9542809B2 (en) 2007-11-23 2017-01-10 Aristocrat Technologies Australia Pty Limited Gaming system and a method of gaming
US8348745B2 (en) 2007-11-23 2013-01-08 Aristocrat Technologies Australia Pty Limited Gaming system and a method of gaming including adding or deleting predetermined symbol from the second symbol store
US20090258694A1 (en) * 2007-11-23 2009-10-15 Aristocrat Technologies Australia Pty Limited Gaming system and a method of gaming
US20090172756A1 (en) * 2007-12-31 2009-07-02 Motorola, Inc. Lighting analysis and recommender system for video telephony
US20090237376A1 (en) * 2008-01-07 2009-09-24 Next Holdings Limited Optical Position Sensing System and Optical Position Sensor Assembly with Convex Imaging Window
US20090213093A1 (en) * 2008-01-07 2009-08-27 Next Holdings Limited Optical position sensor using retroreflection
US20090207144A1 (en) * 2008-01-07 2009-08-20 Next Holdings Limited Position Sensing System With Edge Positioning Enhancement
US20090213094A1 (en) * 2008-01-07 2009-08-27 Next Holdings Limited Optical Position Sensing System and Optical Position Sensor Assembly
US8405637B2 (en) 2008-01-07 2013-03-26 Next Holdings Limited Optical position sensing system and optical position sensor assembly with convex imaging window
US8405636B2 (en) 2008-01-07 2013-03-26 Next Holdings Limited Optical position sensing system and optical position sensor assembly
US20090256857A1 (en) * 2008-04-10 2009-10-15 Davidson Philip L Methods of interfacing with multi-input devices and multi-input display systems employing interfacing techniques
US9256342B2 (en) 2008-04-10 2016-02-09 Perceptive Pixel, Inc. Methods of interfacing with multi-input devices and multi-input display systems employing interfacing techniques
US9372591B2 (en) 2008-04-10 2016-06-21 Perceptive Pixel, Inc. Methods of interfacing with multi-input devices and multi-input display systems employing interfacing techniques
US8788967B2 (en) * 2008-04-10 2014-07-22 Perceptive Pixel, Inc. Methods of interfacing with multi-input devices and multi-input display systems employing interfacing techniques
US8335996B2 (en) 2008-04-10 2012-12-18 Perceptive Pixel Inc. Methods of interfacing with multi-input devices and multi-input display systems employing interfacing techniques
US20090259964A1 (en) * 2008-04-10 2009-10-15 Davidson Philip L Methods of interfacing with multi-input devices and multi-input display systems employing interfacing techniques
US20090259967A1 (en) * 2008-04-10 2009-10-15 Davidson Philip L Methods of interfacing with multi-input devices and multi-input display systems employing interfacing techniques
US8142030B2 (en) 2008-04-28 2012-03-27 Visteon Global Technologies, Inc. Reconfigurable center stack with touch sensing
US20090268163A1 (en) * 2008-04-28 2009-10-29 Upton Beall Bowden Reconfigurable center stack with touch sensing
US20090277694A1 (en) * 2008-05-09 2009-11-12 Smart Technologies Ulc Interactive Input System And Bezel Therefor
US20090278794A1 (en) * 2008-05-09 2009-11-12 Smart Technologies Ulc Interactive Input System With Controlled Lighting
US8902193B2 (en) 2008-05-09 2014-12-02 Smart Technologies Ulc Interactive input system and bezel therefor
US20090278799A1 (en) * 2008-05-12 2009-11-12 Microsoft Corporation Computer vision-based multi-touch sensing using infrared lasers
US8952894B2 (en) 2008-05-12 2015-02-10 Microsoft Technology Licensing, Llc Computer vision-based multi-touch sensing using infrared lasers
US8159455B2 (en) * 2008-07-18 2012-04-17 Apple Inc. Methods and apparatus for processing combinations of kinematical inputs
US20100013768A1 (en) * 2008-07-18 2010-01-21 Apple Inc. Methods and apparatus for processing combinations of kinematical inputs
US8847739B2 (en) 2008-08-04 2014-09-30 Microsoft Corporation Fusing RFID and vision for surface object tracking
US20100031202A1 (en) * 2008-08-04 2010-02-04 Microsoft Corporation User-defined gesture set for surface computing
US20100026470A1 (en) * 2008-08-04 2010-02-04 Microsoft Corporation Fusing rfid and vision for surface object tracking
US20100031203A1 (en) * 2008-08-04 2010-02-04 Microsoft Corporation User-defined gesture set for surface computing
US20100079385A1 (en) * 2008-09-29 2010-04-01 Smart Technologies Ulc Method for calibrating an interactive input system and interactive input system executing the calibration method
US20110205189A1 (en) * 2008-10-02 2011-08-25 John David Newton Stereo Optical Sensors for Resolving Multi-Touch in a Touch Detection System
US10099144B2 (en) 2008-10-08 2018-10-16 Interactive Sports Technologies Inc. Sports simulation system
US20100107184A1 (en) * 2008-10-23 2010-04-29 Peter Rae Shintani TV with eye detection
US20100103104A1 (en) * 2008-10-29 2010-04-29 Electronics And Telecommunications Research Institute Apparatus for user interface based on wearable computing environment and method thereof
US8339378B2 (en) 2008-11-05 2012-12-25 Smart Technologies Ulc Interactive input system with multi-angle reflector
US20100110005A1 (en) * 2008-11-05 2010-05-06 Smart Technologies Ulc Interactive input system with multi-angle reflector
US8102371B2 (en) * 2008-11-21 2012-01-24 Chung-Shan Institute Of Science And Technology, Armaments Bureau, Ministry Of National Defense Calibration method of projection effect
US20100127986A1 (en) * 2008-11-21 2010-05-27 Chih-Ming Liao Calibration method of projection effect
EP2199948A1 (en) * 2008-12-18 2010-06-23 Koninklijke Philips Electronics N.V. Method of plotting a 3D movement in a 1D graph and of comparing two arbitrary 3D movements
US20100199221A1 (en) * 2009-01-30 2010-08-05 Microsoft Corporation Navigation of a virtual plane using depth
US9652030B2 (en) * 2009-01-30 2017-05-16 Microsoft Technology Licensing, Llc Navigation of a virtual plane using a zone of restriction for canceling noise
US10599212B2 (en) 2009-01-30 2020-03-24 Microsoft Technology Licensing, Llc Navigation of a virtual plane using a zone of restriction for canceling noise
US8423426B2 (en) 2009-03-13 2013-04-16 Nike, Inc. Method of customized cleat arrangement
US8577751B2 (en) 2009-03-13 2013-11-05 Nike, Inc. Method of customized cleat arrangement
US8219461B2 (en) * 2009-03-13 2012-07-10 Nike, Inc. Method of customized cleat arrangement
US20100235258A1 (en) * 2009-03-13 2010-09-16 Nike, Inc. Method Of Customized Cleat Arrangement
US7921591B1 (en) * 2009-04-30 2011-04-12 Terry Adcock Flip-up aiming sight
US8788943B2 (en) 2009-05-15 2014-07-22 Ganz Unlocking emoticons using feature codes
US20100293473A1 (en) * 2009-05-15 2010-11-18 Ganz Unlocking emoticons using feature codes
US10582144B2 (en) 2009-05-21 2020-03-03 May Patents Ltd. System and method for control based on face or hand gesture detection
US8614673B2 (en) 2009-05-21 2013-12-24 May Patents Ltd. System and method for control based on face or hand gesture detection
US8614674B2 (en) 2009-05-21 2013-12-24 May Patents Ltd. System and method for control based on face or hand gesture detection
US20100295782A1 (en) * 2009-05-21 2010-11-25 Yehuda Binder System and method for control based on face or hand gesture detection
US20100295823A1 (en) * 2009-05-25 2010-11-25 Korea Electronics Technology Institute Apparatus for touching reflection image using an infrared screen
US9616526B2 (en) * 2009-05-29 2017-04-11 Kuka Roboter Gmbh Method and device for controlling an auxiliary tool axis of a tool being guided by a manipulator
US20100306715A1 (en) * 2009-05-29 2010-12-02 Microsoft Corporation Gestures Beyond Skeletal
US10691216B2 (en) 2009-05-29 2020-06-23 Microsoft Technology Licensing, Llc Combining gestures beyond skeletal
US9383823B2 (en) * 2009-05-29 2016-07-05 Microsoft Technology Licensing, Llc Combining gestures beyond skeletal
US20100305757A1 (en) * 2009-05-29 2010-12-02 Kuka Roboter Gmbh Method And Device For Controlling An Auxiliary Tool Axis Of A Tool Being Guided By A Manipulator
US8692768B2 (en) 2009-07-10 2014-04-08 Smart Technologies Ulc Interactive input system
US20110080490A1 (en) * 2009-10-07 2011-04-07 Gesturetek, Inc. Proximity object tracker
US8547327B2 (en) * 2009-10-07 2013-10-01 Qualcomm Incorporated Proximity object tracker
US9317134B2 (en) 2009-10-07 2016-04-19 Qualcomm Incorporated Proximity object tracker
US8515128B1 (en) 2009-10-07 2013-08-20 Qualcomm Incorporated Hover detection
US8897496B2 (en) 2009-10-07 2014-11-25 Qualcomm Incorporated Hover detection
US20110095977A1 (en) * 2009-10-23 2011-04-28 Smart Technologies Ulc Interactive input system incorporating multi-angle reflecting structure
US20110199387A1 (en) * 2009-11-24 2011-08-18 John David Newton Activating Features on an Imaging Device Based on Manipulations
US20110205155A1 (en) * 2009-12-04 2011-08-25 John David Newton Methods and Systems for Position Detection Using an Interactive Volume
US20110148821A1 (en) * 2009-12-22 2011-06-23 Korea Electronics Technology Institute Infrared Screen-Type Space Touch Apparatus
US20110148822A1 (en) * 2009-12-22 2011-06-23 Korea Electronics Technology Institute Three-Dimensional Space Touch Apparatus Using Multiple Infrared Cameras
US8786576B2 (en) * 2009-12-22 2014-07-22 Korea Electronics Technology Institute Three-dimensional space touch apparatus using multiple infrared cameras
US20110157012A1 (en) * 2009-12-31 2011-06-30 Microsoft Corporation Recognizing interactive media input
US9207765B2 (en) * 2009-12-31 2015-12-08 Microsoft Technology Licensing, Llc Recognizing interactive media input
US10064693B2 (en) * 2010-01-14 2018-09-04 Brainlab Ag Controlling a surgical navigation system
US9542001B2 (en) * 2010-01-14 2017-01-10 Brainlab Ag Controlling a surgical navigation system
US20120323364A1 (en) * 2010-01-14 2012-12-20 Rainer Birkenbach Controlling a surgical navigation system
US20110197263A1 (en) * 2010-02-11 2011-08-11 Verizon Patent And Licensing, Inc. Systems and methods for providing a spatial-input-based multi-user shared display experience
US8522308B2 (en) 2010-02-11 2013-08-27 Verizon Patent And Licensing Inc. Systems and methods for providing a spatial-input-based multi-user shared display experience
US20170123510A1 (en) * 2010-02-23 2017-05-04 Muv Interactive Ltd. System for projecting content to a display surface having user-controlled size, shape and location/direction and apparatus and methods useful in conjunction therewith
US10528154B2 (en) * 2010-02-23 2020-01-07 Touchjet Israel Ltd System for projecting content to a display surface having user-controlled size, shape and location/direction and apparatus and methods useful in conjunction therewith
US9759917B2 (en) 2010-02-28 2017-09-12 Microsoft Technology Licensing, Llc AR glasses with event and sensor triggered AR eyepiece interface to external devices
US9875406B2 (en) 2010-02-28 2018-01-23 Microsoft Technology Licensing, Llc Adjustable extension for temple arm
US10860100B2 (en) 2010-02-28 2020-12-08 Microsoft Technology Licensing, Llc AR glasses with predictive control of external device based on event input
US10539787B2 (en) 2010-02-28 2020-01-21 Microsoft Technology Licensing, Llc Head-worn adaptive display
US10268888B2 (en) 2010-02-28 2019-04-23 Microsoft Technology Licensing, Llc Method and apparatus for biometric data capture
US20120194553A1 (en) * 2010-02-28 2012-08-02 Osterhout Group, Inc. Ar glasses with sensor and user action based control of external devices with feedback
US10180572B2 (en) 2010-02-28 2019-01-15 Microsoft Technology Licensing, Llc AR glasses with event and user action control of external applications
US8599134B2 (en) * 2010-03-11 2013-12-03 Ricoh Company, Ltd. Apparatus, method, and system for identifying laser spot
US20110221919A1 (en) * 2010-03-11 2011-09-15 Wenbo Zhang Apparatus, method, and system for identifying laser spot
US20110234542A1 (en) * 2010-03-26 2011-09-29 Paul Marson Methods and Systems Utilizing Multiple Wavelengths for Position Detection
US11117033B2 (en) 2010-04-26 2021-09-14 Wilbert Quinc Murdock Smart system for display of dynamic movement parameters in sports and training
US8593402B2 (en) 2010-04-30 2013-11-26 Verizon Patent And Licensing Inc. Spatial-input-based cursor projection systems and methods
US10521951B2 (en) * 2010-06-01 2019-12-31 Vladimir Vaganov 3D digital painting
US20170309057A1 (en) * 2010-06-01 2017-10-26 Vladimir Vaganov 3d digital painting
US10217264B2 (en) * 2010-06-01 2019-02-26 Vladimir Vaganov 3D digital painting
US20190206112A1 (en) * 2010-06-01 2019-07-04 Vladimir Vaganov 3d digital painting
US10922870B2 (en) * 2010-06-01 2021-02-16 Vladimir Vaganov 3D digital painting
US20110304650A1 (en) * 2010-06-09 2011-12-15 The Boeing Company Gesture-Based Human Machine Interface
US9569010B2 (en) * 2010-06-09 2017-02-14 The Boeing Company Gesture-based human machine interface
CN102279670A (en) * 2010-06-09 2011-12-14 The Boeing Company Gesture-based human machine interface
EP2395413A1 (en) * 2010-06-09 2011-12-14 The Boeing Company Gesture-based human machine interface
US8842166B2 (en) 2010-06-14 2014-09-23 Nintendo Co., Ltd. Storage medium having stored therein stereoscopic image display program, stereoscopic image display device, stereoscopic image display system, and stereoscopic image display method
US9001192B2 (en) 2010-06-14 2015-04-07 Nintendo Co., Ltd. Storage medium having stored therein stereoscopic image display program, stereoscopic image display device, stereoscopic image display system, and stereoscopic image display method
US8902298B2 (en) 2010-06-14 2014-12-02 Nintendo Co., Ltd. Storage medium having stored therein stereoscopic image display program, stereoscopic image display device, stereoscopic image display system, and stereoscopic image display method
CN102243528A (en) * 2010-06-22 2011-11-16 Microsoft Corporation Providing directional force feedback in free space
US11470303B1 (en) 2010-06-24 2022-10-11 Steven M. Hoffberg Two dimensional to three dimensional moving image converter
GB2482729A (en) * 2010-08-13 2012-02-15 Monnowtone Ltd An augmented reality musical instrument simulation system
US8467978B2 (en) 2010-08-31 2013-06-18 The Boeing Company Identifying features on a surface of an object using wavelet analysis
US9167289B2 (en) 2010-09-02 2015-10-20 Verizon Patent And Licensing Inc. Perspective display systems and methods
US8957856B2 (en) 2010-10-21 2015-02-17 Verizon Patent And Licensing Inc. Systems, methods, and apparatuses for spatial input associated with a display
US9092135B2 (en) * 2010-11-01 2015-07-28 Sony Computer Entertainment Inc. Control of virtual object using device touch interface functionality
US9372624B2 (en) 2010-11-01 2016-06-21 Sony Interactive Entertainment Inc. Control of virtual object using device touch interface functionality
US9575594B2 (en) 2010-11-01 2017-02-21 Sony Interactive Entertainment Inc. Control of virtual object using device touch interface functionality
US20120110447A1 (en) * 2010-11-01 2012-05-03 Sony Computer Entertainment Inc. Control of virtual object using device touch interface functionality
CN102566825A (en) * 2010-12-08 2012-07-11 Wistron Corporation Positioning compensation method and optical touch module thereof
EP2656308A4 (en) * 2010-12-21 2016-09-28 Samsung Electronics Co Ltd Method, device, and system for providing sensory information and sense
US9483116B2 (en) 2010-12-21 2016-11-01 Samsung Electronics Co., Ltd Method, device, and system for providing sensory information and sense
WO2012086984A2 (en) 2010-12-21 2012-06-28 Samsung Electronics Co., Ltd. Method, device, and system for providing sensory information and sense
US9582144B2 (en) * 2011-01-20 2017-02-28 Blackberry Limited Three-dimensional, multi-depth presentation of icons associated with a user interface
US20120192114A1 (en) * 2011-01-20 2012-07-26 Research In Motion Corporation Three-dimensional, multi-depth presentation of icons associated with a user interface
CN103348305A (en) * 2011-02-04 2013-10-09 Koninklijke Philips N.V. Gesture controllable system which uses proprioception to create absolute frame of reference
WO2012104772A1 (en) 2011-02-04 2012-08-09 Koninklijke Philips Electronics N.V. Gesture controllable system uses proprioception to create absolute frame of reference
RU2605349C2 (en) * 2011-02-04 2016-12-20 Конинклейке Филипс Н.В. Gesture controllable system using proprioception to create absolute frame of reference
US20120238366A1 (en) * 2011-03-15 2012-09-20 Maurice Tedder Robot Game for Multiple Players that is Remotely Controlled over a Network
CN102736733A (en) * 2011-04-15 2012-10-17 Ingeonix Corporation Electronic systems with touch free input devices and associated methods
US20120262366A1 (en) * 2011-04-15 2012-10-18 Ingeonix Corporation Electronic systems with touch free input devices and associated methods
US9491520B2 (en) * 2011-06-13 2016-11-08 Samsung Electronics Co., Ltd. Display apparatus and method for controlling display apparatus and remote controller having a plurality of sensor arrays
US20120314022A1 (en) * 2011-06-13 2012-12-13 Samsung Electronics Co., Ltd. Display apparatus and method for controlling display apparatus and remote controller
US20140282008A1 (en) * 2011-10-20 2014-09-18 Koninklijke Philips N.V. Holographic user interfaces for medical procedures
CN103988142A (en) * 2011-11-04 2014-08-13 Tobii Technology Ab Portable device
US10158750B2 (en) 2011-11-04 2018-12-18 Remote TelePointer, LLC Method and system for user interface for interactive devices using a mobile device
CN108170218A (en) * 2011-11-04 2018-06-15 Tobii Ab Portable device
US9465415B2 (en) 2011-11-04 2016-10-11 Tobii Ab Portable device
US9462210B2 (en) * 2011-11-04 2016-10-04 Remote TelePointer, LLC Method and system for user interface for interactive devices using a mobile device
US10757243B2 (en) * 2011-11-04 2020-08-25 Remote Telepointer Llc Method and system for user interface for interactive devices using a mobile device
WO2013064320A1 (en) * 2011-11-04 2013-05-10 Tobii Technology Ab Portable device
US10037086B2 (en) 2011-11-04 2018-07-31 Tobii Ab Portable device
US20130113993A1 (en) * 2011-11-04 2013-05-09 Remote TelePointer, LLC Method and system for user interface for interactive devices using a mobile device
US10061393B2 (en) 2011-11-04 2018-08-28 Tobii Ab Portable device
EP2590047A1 (en) * 2011-11-04 2013-05-08 Tobii Technology AB Portable device
US9772690B2 (en) 2011-11-04 2017-09-26 Tobii Ab Portable device
US10409388B2 (en) 2011-11-04 2019-09-10 Tobii Ab Portable device
KR101946669B1 (en) * 2011-11-04 2019-02-11 Tobii Ab Portable device
US8975585B2 (en) * 2011-11-28 2015-03-10 Eads Deutschland Gmbh Method and device for tracking a moving target object
US20140061478A1 (en) * 2011-11-28 2014-03-06 Eads Deutschland Gmbh Method and Device for Tracking a Moving Target Object
US11385724B2 (en) 2011-12-02 2022-07-12 Intel Corporation Techniques for notebook hinge sensors
US10459527B2 (en) 2011-12-02 2019-10-29 Intel Corporation Techniques for notebook hinge sensors
US11809636B2 (en) 2011-12-02 2023-11-07 Intel Corporation Techniques for notebook hinge sensors
US10936084B2 (en) 2011-12-02 2021-03-02 Intel Corporation Techniques for notebook hinge sensors
US20140317576A1 (en) * 2011-12-06 2014-10-23 Thomson Licensing Method and system for responding to user's selection gesture of object displayed in three dimensions
US20130231771A1 (en) * 2012-03-02 2013-09-05 Massachusetts Institute Of Technology Methods and Apparatus for Handheld Tool
US10005312B2 (en) 2012-03-02 2018-06-26 Massachusetts Institute Of Technology Methods and apparatus for handheld tool
US9342632B2 (en) * 2012-03-02 2016-05-17 Massachusetts Institute Of Technology Methods and apparatus for handheld tool
US8761964B2 (en) * 2012-03-26 2014-06-24 Hon Hai Precision Industry Co., Ltd. Computing device and method for controlling unmanned aerial vehicle in flight space
US20130253733A1 (en) * 2012-03-26 2013-09-26 Hon Hai Precision Industry Co., Ltd. Computing device and method for controlling unmanned aerial vehicle in flight space
US9489951B2 (en) 2012-05-30 2016-11-08 Nec Corporation Information processing system, information processing method, communication terminal, information processing apparatus, and control method and control program thereof
EP2857958A4 (en) * 2012-05-30 2016-03-23 Nec Corp Information processing system, information processing method, communication terminal, information processing device and control method and control program therefor
US20190262699A1 (en) * 2012-06-04 2019-08-29 Sony Interactive Entertainment Inc. Split-screen presentation based on user location
US11065532B2 (en) * 2012-06-04 2021-07-20 Sony Interactive Entertainment Inc. Split-screen presentation based on user location and controller location
US20140092016A1 (en) * 2012-09-28 2014-04-03 Pixart Imaging Inc. Handheld Pointing Device and Operation Method Thereof
CN103729092A (en) * 2012-10-12 2014-04-16 Pixart Imaging Inc. Handheld pointing device and control method thereof
US9952666B2 (en) 2012-11-27 2018-04-24 Facebook, Inc. Systems and methods of eye tracking control on mobile device
US9612656B2 (en) 2012-11-27 2017-04-04 Facebook, Inc. Systems and methods of eye tracking control on mobile device
US9541634B2 (en) 2012-11-30 2017-01-10 WorldViz LLC Precision position tracking system
US9692990B2 (en) 2012-11-30 2017-06-27 WorldViz LLC Infrared tracking system
US9110503B2 (en) 2012-11-30 2015-08-18 WorldViz LLC Precision position tracking device
CN110362205A (en) * 2012-12-03 2019-10-22 Qualcomm Incorporated Device and method for an infrared contactless gesture system
US8914163B2 (en) * 2013-02-26 2014-12-16 Honda Motor Co., Ltd. System and method for incorporating gesture and voice recognition into a single system
US9211854B2 (en) * 2013-02-26 2015-12-15 Honda Motor Co., Ltd. System and method for incorporating gesture and voice recognition into a single system
US20140371955A1 (en) * 2013-02-26 2014-12-18 Edge 3 Technologies Llc System And Method For Incorporating Gesture And Voice Recognition Into A Single System
US8744645B1 (en) * 2013-02-26 2014-06-03 Honda Motor Co., Ltd. System and method for incorporating gesture and voice recognition into a single system
US20140244072A1 (en) * 2013-02-26 2014-08-28 Pedram Vaghefinazari System And Method For Incorporating Gesture And Voice Recognition Into A Single System
US20160027199A1 (en) * 2013-03-01 2016-01-28 Xiang Cao Object creation using body gestures
US9928634B2 (en) * 2013-03-01 2018-03-27 Microsoft Technology Licensing, Llc Object creation using body gestures
US20140254870A1 (en) * 2013-03-11 2014-09-11 Lenovo (Singapore) Pte. Ltd. Method for recognizing motion gesture commands
US20140267033A1 (en) * 2013-03-14 2014-09-18 Omnivision Technologies, Inc. Information Technology Device Input Systems And Associated Methods
US9294539B2 (en) * 2013-03-14 2016-03-22 Microsoft Technology Licensing, Llc Cooperative federation of digital devices via proxemics and device micro-mobility
US10185406B2 (en) * 2013-03-14 2019-01-22 Omnivision Technologies, Inc. Information technology device input systems and associated methods
US20140280748A1 (en) * 2013-03-14 2014-09-18 Microsoft Corporation Cooperative federation of digital devices via proxemics and device micro-mobility
US9774653B2 (en) 2013-03-14 2017-09-26 Microsoft Technology Licensing, Llc Cooperative federation of digital devices via proxemics and device micro-mobility
US20160104037A1 (en) * 2013-05-07 2016-04-14 Zienon Llc Method and device for generating motion signature on the basis of motion signature information
US10779337B2 (en) 2013-05-07 2020-09-15 Hangzhou Zhileng Technology Co. Ltd. Method, apparatus and system for establishing connection between devices
US9671868B2 (en) 2013-06-11 2017-06-06 Honeywell International Inc. System and method for volumetric computing
CN104345881A (en) * 2013-08-09 2015-02-11 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
US20150078617A1 (en) * 2013-09-13 2015-03-19 Research & Business Foundation Sungkyunkwan University Mobile terminal and method for generating control command using marker attached to finger
US9304598B2 (en) * 2013-09-13 2016-04-05 Research And Business Foundation Sungkyunkwan University Mobile terminal and method for generating control command using marker attached to finger
US9781475B2 (en) 2013-09-27 2017-10-03 Beijing Lenovo Software Ltd. Information processing method, system and electronic device
CN104516660A (en) * 2013-09-27 2015-04-15 Lenovo (Beijing) Co., Ltd. Information processing method and system and electronic device
US9870357B2 (en) * 2013-10-28 2018-01-16 Microsoft Technology Licensing, Llc Techniques for translating text via wearable computing device
US20150120279A1 (en) * 2013-10-28 2015-04-30 Linkedin Corporation Techniques for translating text via wearable computing device
US9785243B2 (en) 2014-01-30 2017-10-10 Honeywell International Inc. System and method for providing an ergonomic three-dimensional, gesture based, multimodal interface for use in flight deck applications
US11195233B1 (en) * 2014-06-12 2021-12-07 Allstate Insurance Company Virtual simulation for insurance
US11216887B1 (en) * 2014-06-12 2022-01-04 Allstate Insurance Company Virtual simulation for insurance
US11861724B2 (en) 2014-06-12 2024-01-02 Allstate Insurance Company Virtual simulation for insurance
US11530050B2 (en) * 2014-10-17 2022-12-20 Sony Corporation Control device, control method, and flight vehicle device
US20180273201A1 (en) * 2014-10-17 2018-09-27 Sony Corporation Control device, control method, and flight vehicle device
US20230070563A1 (en) * 2014-10-17 2023-03-09 Sony Group Corporation Control device, control method, and flight vehicle device
US11884418B2 (en) * 2014-10-17 2024-01-30 Sony Group Corporation Control device, control method, and flight vehicle device
CN104407693A (en) 2014-10-23 2015-03-11 Zhang Guopei Somatosensory operation method, somatosensory operation system and data processing device for small smart devices
US10495726B2 (en) 2014-11-13 2019-12-03 WorldViz, Inc. Methods and systems for an immersive virtual reality system using multiple active markers
US9804257B2 (en) 2014-11-13 2017-10-31 WorldViz LLC Methods and systems for an immersive virtual reality system using multiple active markers
US10416708B2 (en) 2015-02-10 2019-09-17 Nintendo Co., Ltd. Accessory and information processing system
EP3614655A1 (en) * 2015-02-10 2020-02-26 Nintendo Co., Ltd. Accessory and information processing system
EP3057297B1 (en) * 2015-02-10 2019-11-20 Nintendo Co., Ltd. Accessory and information processing system
US10514723B2 (en) 2015-02-10 2019-12-24 Nintendo Co., Ltd. Accessory and information processing system
CN104766315A (en) * 2015-03-30 2015-07-08 Lenovo (Beijing) Co., Ltd. Method and device for calibrating the relative position relationship between an image collection device and a display screen
CN106156398A (en) * 2015-05-12 2016-11-23 Siemens Healthcare GmbH Operating device and method for computer-aided simulation
EP3133592A1 (en) * 2015-08-19 2017-02-22 Samsung Electronics Co., Ltd. Display apparatus and controlling method thereof for the selection of clothes
US10424116B2 (en) 2015-08-19 2019-09-24 Samsung Electronics Co., Ltd. Display apparatus and controlling method thereof
US20170205939A1 (en) * 2015-09-10 2017-07-20 Boe Technology Group Co., Ltd. Method and apparatus for touch responding of wearable device as well as wearable device
US10185433B2 (en) * 2015-09-10 2019-01-22 Boe Technology Group Co., Ltd. Method and apparatus for touch responding of wearable device as well as wearable device
US20170098014A1 (en) * 2015-10-06 2017-04-06 Radix Inc. System and Method for Generating Digital Information and Altering Digital Models of Components With Same
US10599710B2 (en) * 2015-10-06 2020-03-24 Radix Inc. System and method for generating digital information and altering digital models of components with same
US10269089B2 (en) * 2015-12-16 2019-04-23 WorldViz, Inc. Multi-user virtual reality processing
US9990689B2 (en) 2015-12-16 2018-06-05 WorldViz, Inc. Multi-user virtual reality processing
US10095928B2 (en) 2015-12-22 2018-10-09 WorldViz, Inc. Methods and systems for marker identification
US10452916B2 (en) 2015-12-22 2019-10-22 WorldViz, Inc. Methods and systems for marker identification
US10607340B2 (en) * 2016-02-17 2020-03-31 Samsung Electronics Co., Ltd. Remote image transmission system, display apparatus, and guide displaying method thereof
US10922890B1 (en) 2016-05-03 2021-02-16 WorldViz, Inc. Multi-user virtual and augmented reality tracking systems
US10500482B2 (en) * 2016-05-03 2019-12-10 Performance Designed Products Llc Method of operating a video gaming system
US10242501B1 (en) 2016-05-03 2019-03-26 WorldViz, Inc. Multi-user virtual and augmented reality tracking systems
US10245506B2 (en) * 2016-05-03 2019-04-02 Performance Designed Products Llc Video gaming system and method of operation
US20170319951A1 (en) * 2016-05-03 2017-11-09 Performance Designed Products Llc Video gaming system and method of operation
US11450073B1 (en) 2016-05-03 2022-09-20 WorldViz, Inc. Multi-user virtual and augmented reality tracking systems
DE102016208095A1 (en) 2016-05-11 2017-11-16 Bayerische Motoren Werke Aktiengesellschaft Means of transport, arrangement and method for the entertainment of a user of a means of transportation
US20190164392A1 (en) * 2016-07-28 2019-05-30 Sandmanden Storsække Service V/ Flemming Petersen Method for warning a user, a wearable warning system and use of the system
US20180050266A1 (en) * 2016-08-16 2018-02-22 Square Enix Co., Ltd. Program, article selection system, terminal device and article selection method
DE102016013028A1 (en) * 2016-11-02 2018-05-03 Friedrich-Schiller-Universität Jena Method and device for precise position determination of arrow-like objects relative to surfaces
CN106601217A (en) * 2016-12-06 2017-04-26 Beijing University of Posts and Telecommunications Interactive musical instrument performance method and device
US10336188B2 (en) * 2017-03-15 2019-07-02 Subaru Corporation Vehicle display system and method of controlling vehicle display system
US10403050B1 (en) * 2017-04-10 2019-09-03 WorldViz, Inc. Multi-user virtual and augmented reality tracking systems
US11393037B1 (en) * 2017-06-29 2022-07-19 State Farm Mutual Automobile Insurance Company Movement-based device control
CN107167077A (en) * 2017-07-07 2017-09-15 BOE Technology Group Co., Ltd. Stereo vision measurement system and stereo vision measurement method
US10656099B2 (en) * 2017-11-28 2020-05-19 Wuhan China Star Optoelectronics Semiconductor Display Technology Co., Ltd. Monitoring method and monitoring apparatus of thimble bases
US11123863B2 (en) * 2018-01-23 2021-09-21 Seiko Epson Corporation Teaching device, robot control device, and robot system
US10782779B1 (en) * 2018-09-27 2020-09-22 Apple Inc. Feedback coordination for a virtual interaction
US11230005B2 (en) * 2019-01-24 2022-01-25 Fanuc Corporation Following robot and work robot system
US11879959B2 (en) 2019-05-13 2024-01-23 Cast Group Of Companies Inc. Electronic tracking device and related system
US11517146B2 (en) 2019-05-21 2022-12-06 Whirlpool Corporation Cooking assistance appliance
USD927996S1 (en) 2019-05-21 2021-08-17 Whirlpool Corporation Cooking assistance appliance
US11389735B2 (en) 2019-10-23 2022-07-19 Ganz Virtual pet system
US11872498B2 (en) 2019-10-23 2024-01-16 Ganz Virtual pet system
US11599257B2 (en) * 2019-11-12 2023-03-07 Cast Group Of Companies Inc. Electronic tracking device and charging apparatus
US11829596B2 (en) * 2019-11-12 2023-11-28 Cast Group Of Companies Inc. Electronic tracking device and charging apparatus
US20230195297A1 (en) * 2019-11-12 2023-06-22 Cast Group Of Companies Inc. Electronic tracking device and charging apparatus
US11358059B2 (en) 2020-05-27 2022-06-14 Ganz Live toy system
CN117012140A (en) * 2023-08-01 2023-11-07 Suzhou Xuzhi Design & Construction Co., Ltd. Scene-experience-type startup control device for an exhibition hall LED screen

Also Published As

Publication number Publication date
US8614668B2 (en) 2013-12-24
US7843429B2 (en) 2010-11-30
US20120040755A1 (en) 2012-02-16
US8847887B2 (en) 2014-09-30
US20110059798A1 (en) 2011-03-10
US20130169527A1 (en) 2013-07-04
US8760398B2 (en) 2014-06-24
US20060033713A1 (en) 2006-02-16
US8068095B2 (en) 2011-11-29
US8736548B2 (en) 2014-05-27
US20130249791A1 (en) 2013-09-26
US20130179034A1 (en) 2013-07-11

Similar Documents

Publication Publication Date Title
US8760398B2 (en) Interactive video based games using objects sensed by TV cameras
US6720949B1 (en) Man machine interfaces and applications
US8194924B2 (en) Camera based sensing in handheld, mobile, gaming or other devices
US9939911B2 (en) Computer interface for remotely controlled objects and wearable articles with absolute pose detection component
US7826641B2 (en) Apparatus and method for determining an absolute pose of a manipulated object in a real three-dimensional environment with invariant features
Novacek et al. Overview of controllers of user interface for virtual reality
Loviscach Playing with all senses: Human–Computer interface devices for games
Worrallo A multiple optical tracking based approach for enhancing hand-based interaction in virtual reality simulations
Logsdon Arm-Hand-Finger Video Game Interaction
Hernoux et al. Is a Time-Of-Flight Camera Better than a Mouse for 3D Direct Selection in Virtual Worlds?
Thomas A virtual musical instrument exhibit for a science centre.
JPH07281826A (en) Coordinate input device

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION