This is all from a discussion that took place in the public chat (well, my brain fart anyway, after Eddie and a few others were discussing Natal only accurately working at 13 feet). But here's my entire thought process:
I figured it out!
Since the two lenses are fixed in position and tracking is handled by the software or firmware, MS would essentially have to give the lenses a sweet spot where their lines of sight cross to help the software determine depth (which might be the 13 feet). Anything closer or farther away will essentially double, making it harder to track. Proof of concept: look at an object relatively close (say, 6 feet) and alternately blink your eyes. The object looks about the same (everything outside of it, though, shifts position relative to it). Now, while still focusing on that same object, hold up a finger in front of you, alternately blink again, and notice the relative position of the finger. Big variation, right? Do the same thing, but focus on your finger instead, and take note of the relative position of the background object(s). This is why Natal could fail: a lot of the software is going to rely on people and objects being in that sweet spot in order to accurately determine depth and distance.
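The blinking-eyes experiment above is basically angular disparity in a toed-in stereo pair. A minimal sketch of the idea (the baseline and sweet-spot numbers are made-up assumptions for illustration, not Kinect specs):

```python
# Toy model of a toed-in stereo pair: the two optical axes cross at a
# "sweet spot" distance. A point at that distance projects to (nearly)
# the same place in both views (zero disparity); anything nearer or
# farther "doubles" -- it lands at different positions in each image.
import math

BASELINE_FT = 0.25    # assumed separation between the two lenses
SWEET_SPOT_FT = 13.0  # distance where the lines of sight cross

def disparity_deg(distance, baseline=BASELINE_FT, sweet_spot=SWEET_SPOT_FT):
    """Angular disparity (degrees) of a point on the center line at
    `distance`, relative to the zero-disparity sweet spot."""
    angle_at_point = math.atan2(baseline / 2, distance)
    angle_at_sweet = math.atan2(baseline / 2, sweet_spot)
    return math.degrees(2 * (angle_at_point - angle_at_sweet))

for d in (6, 13, 20):
    print(f"{d:>2} ft: disparity = {disparity_deg(d):+.3f} deg")
```

At 13 ft the disparity is zero; closer objects get positive disparity and farther ones negative, which is exactly the "doubling" your finger shows against the background.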
But the lenses don't have a 180-degree viewing field, and distant people would be hard to focus on. So, to compensate for needing to track 4 people, you'd have to aim that sweet spot at a plane where 4 people can stand or sit comfortably without being uncomfortably close to one another, and 13 feet seems pretty realistic for that. The fixed dual-lens tech might wind up being the device's undoing, since most TV sets sit much closer to people's viewing area than the supposed 13 feet. (Not only that, but a lot of people don't use their consoles as family-fun-time systems, and the ones that do most likely don't all sit around a small couch 6 feet from the TV like I do :P)
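The 4-players-at-13-feet point can be sanity-checked with simple geometry: horizontal coverage grows linearly with distance. The 57-degree horizontal field of view below is the figure commonly cited for Kinect, but treat it as an assumption here:

```python
import math

def coverage_ft(distance_ft, fov_deg=57.0):
    """Width of the camera's view (feet) at a given distance, for an
    assumed horizontal field of view. Simple pinhole geometry."""
    return 2 * distance_ft * math.tan(math.radians(fov_deg / 2))

for d in (6, 13):
    width = coverage_ft(d)
    print(f"at {d} ft the camera sees ~{width:.1f} ft across "
          f"(~{width / 4:.1f} ft per player with 4 players)")
```

Under these assumptions you only get about 6.5 ft of width at 6 ft out, versus roughly 14 ft at 13 ft, which is about the minimum you'd want to fit four people side by side.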
The only way you could resolve such an issue, though, would be to calibrate the lens angles so that the 'sweet spot' lines up with one's viewing area. Or, unrealistically, a camera with two lenses that dynamically change their angle to follow a subject, much the same way eyes can track and focus on things independently of the forward-facing position of the head.
I remember reading a while back that Microsoft dropped onboard image processing from the camera hardware itself. This was probably to drive down the camera's cost, so instead the camera relies on the processing capabilities of the 360 itself. It will be using system resources to do the post-processing and work with the games, leaving fewer resources for the games themselves.
Don't get me wrong, the 360 is a pretty powerful system, but it has nowhere near the FLOPS of the PS3's processors. And yes, the PS3 does a lot of the post-processing of the Eye's image, but it has more resources to do so. This is why the Eye and Move combo might prove better in terms of capabilities and 3D tracking (for one, most games will definitely need some sort of button input). Even the Eye alone is capable of tracking faces and other objects; look at EyePet.
Going back to the 13-foot focal plane, though: it could probably only handle reasonably accurate 3D movement within maybe a 5-foot box forward or backward of that 13-foot mark, since perspective changes dramatically with distance. That would mean a good 8 to 18 feet of somewhat accurate depth tracking, but it still won't be 1:1 like the Eye/Move, since that uses the orb's apparent size, coupled with the accelerometers and gyros inside the controllers, to determine location and orientation. Not only that, but the Eye doesn't have a defined 'sweet spot', so no matter how close you get, it can still track accurately. (The farther away you get, on the other hand, tracking becomes more difficult since size and distinctive features become less and less apparent, but that goes for either of them; then again, who is going to be playing a game 20 feet away? lol)
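The orb-size trick mentioned above is just the pinhole model: the ball's apparent diameter scales as 1/distance, so the camera can invert that to recover range at any distance. A minimal sketch, where the ball diameter and focal length in pixels are illustrative assumptions, not measured PS Eye/Move values:

```python
# Pinhole-camera sketch of estimating the Move orb's distance from its
# apparent size: pixel_diameter = f * real_diameter / distance, so
# distance = f * real_diameter / pixel_diameter.
BALL_DIAMETER_CM = 4.5  # roughly the Move orb's size (assumed)
FOCAL_PX = 600.0        # assumed camera focal length, in pixels

def apparent_diameter_px(distance_cm):
    """How wide the ball appears on the sensor at a given distance."""
    return FOCAL_PX * BALL_DIAMETER_CM / distance_cm

def distance_from_pixels(diameter_px):
    """Invert the projection to recover distance from apparent size."""
    return FOCAL_PX * BALL_DIAMETER_CM / diameter_px

for d_cm in (100, 300, 600):
    px = apparent_diameter_px(d_cm)
    print(f"{d_cm} cm -> {px:.1f} px wide; recovered "
          f"{distance_from_pixels(px):.0f} cm")
```

Note there's no sweet spot in this model: the estimate works at any range, it just gets noisier as the ball shrinks to a few pixels, matching the point about distant tracking degrading for either system.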
Anyway, that's it for my 2AM brain fart
Wow, you got all that off your chest and no one responded? What a shame.
I am going to see how well Kinect does when it comes out. I think the lag is going to be annoying; I am sure they will tweak it as they get feedback, but hopefully it's nothing like the so-called tweaking they did with the red ring of death. I think I read somewhere that the preferred distance is 6-13 feet.
Not 100% sure it was 6-13 or 4-13, but it's somewhere in there. Also, the Kinect camera rotates and follows you to some extent, and recalibrates if it needs to refocus, I believe, which it does rather quickly.
Sony made Kinect, then said the 3D cameras of today are garbage, so they sold it to Microsoft and came up with Move… Plus, if you are over 5' 7", which most people are, then it has a hard time tracking you…
It's not so much that it can't track a person taller than 5' 7"; it's that you have to angle the camera up higher or step back farther to get your whole body in the eye of the camera.
But if there was a 5' 7" guy and a 6' 2" guy playing at the same time, then either you'd both have to be really far from the camera, or just one of you gets tracked from the waist up, I guess.