Panasonic AG-3DA1 3D Production Post White Paper

3D Production and Post
Barry Clark
03-26-10
Real World 3D
When a viewer’s eyes focus on a real object, they automatically converge on the object.
From the separate perspectives seen by the two eyes, the viewer’s brain fuses a coherent
3D image of the object. All of the objects that the viewer sees in 3D occupy a cone that is
bounded by the edges of the overlapping fields of focus and convergence of the viewer’s
eyes. Everything outside of this cone is seen by the viewer in 2D. As the viewer’s eyes
focus on progressively more distant objects, the zone of convergence shifts with the zone
of focus and the cone shrinks in width until an outer limit of distance is reached (a distance of 100-200 yards in the average adult), beyond which the viewer can no longer distinguish the perspectives seen by the left and right eyes.
Everything that is located further away from the viewer seems to lie on a flat, 2D plane. To judge the relative position in space of objects that lie beyond this stereoscopic limit, a viewer must rely on monoscopic depth cues, including motion cues (nearby objects seem to shift position more rapidly than distant objects), atmospheric cues (the hue of objects shifts toward blue as they move into the distance), and occlusion cues (near objects obscure the view of more distant objects).
Fig.1 – Real World 3D
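To make the shrinking cone concrete, the short sketch below (an illustration added here, not part of the original paper) computes the vergence angle between the two lines of sight for a point viewed straight ahead at several distances. The interocular value, the function name, and the sample distances are assumptions chosen only to show the trend.

```python
import math

# Assumed average adult interocular separation, roughly 2.5 in (~0.065 m).
INTEROCULAR_M = 0.065

def vergence_angle_deg(distance_m):
    """Angle (degrees) between the lines of sight when both eyes
    converge on a point straight ahead at distance_m."""
    return math.degrees(2 * math.atan((INTEROCULAR_M / 2) / distance_m))

for d in (1, 10, 100, 200):  # distances in meters
    print(f"{d:>4} m -> vergence angle {vergence_angle_deg(d):.4f} deg")
```

By a couple of hundred meters the angle has fallen to a few hundredths of a degree, consistent with the observation above that beyond roughly 100-200 yards the left-eye and right-eye perspectives can no longer be told apart.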
Simulated 3D
The experience of viewing a 3D film is significantly different from the way a viewer sees
3D in the real world. The most obvious differences between real world 3D and the
simulated 3D that is viewed on a screen are a consequence of the fixed depth-of-field and
the fixed point-of-view of the lenses that capture the images. As a result of these
constraints, viewers watching simulated 3D can no longer alter their point-of-view simply
by shifting the position of their heads, as they can in the real world. And when turning
their attention from one object of interest to another, they can no longer simply refocus
their eyes, as they can in the real world. In a 3D film, the point-of-view and the focus are
invariables established on the set. In addition, when looking at a 3D object displayed on a
screen, a viewer’s eyes must focus on the screen while, at the same time, they converge on
a point in space that may be located beyond the screen, on the screen, or in front of the screen. As a result of this process, which differs from the way a viewer sees the world,
the viewer has the sensation that the 3D object is located either in the space beyond the
screen, on the screen plane, or in front of the screen. A 3D object that appears to be
located on the screen plane is relatively easy for a viewer to watch. But, over time, a
viewer may experience eyestrain from the effort involved in fusing coherent 3D images of
objects that reside far beyond or far in front of the screen.
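The relationship between on-screen parallax and the apparent position of the fused object can be sketched with simple similar-triangle geometry. The snippet below is an illustrative model added here, not material from the paper; the eye separation, viewing distance, function name, and sample parallax values are assumptions.

```python
# Illustrative sketch: where a fused 3D point appears to sit, given the
# horizontal parallax between its left-eye and right-eye images on screen.
# Positive (uncrossed) parallax places the point behind the screen plane,
# negative (crossed) parallax places it in front, zero places it on screen.

EYE_SEPARATION_M = 0.065    # assumed average adult eye separation
VIEWING_DISTANCE_M = 3.0    # assumed viewer-to-screen distance

def perceived_distance_m(parallax_m):
    """Apparent distance of the converged point from the viewer,
    derived from similar triangles between the eyes and the screen."""
    if parallax_m >= EYE_SEPARATION_M:
        # Lines of sight are parallel or diverging; the eyes cannot fuse this.
        return float("inf")
    return VIEWING_DISTANCE_M * EYE_SEPARATION_M / (EYE_SEPARATION_M - parallax_m)

for p_mm in (-30, 0, 30, 65):
    d = perceived_distance_m(p_mm / 1000)
    print(f"parallax {p_mm:+3d} mm -> apparent depth {d:.2f} m")
```

In this model, zero parallax places the object at the viewing distance (on the screen plane), while parallax approaching the viewer's eye separation sends the object toward infinity; anything larger would force the eyes to diverge, which viewers cannot do comfortably. This is one reason the eyestrain described above grows with objects placed far behind or far in front of the screen.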