Constructs a new Pipeline.
Emitted when the frame is updated.
Draw the camera to the screen as a full screen quad.
Please note this function modifies some GL state during its operation, so you may need to reset the following GL state if you use it:
- The currently bound texture 2D is set to null (e.g. gl.bindTexture(gl.TEXTURE_2D, null))
- The currently bound array buffer is set to null (e.g. gl.bindBuffer(gl.ARRAY_BUFFER, null))
- The currently used program is set to null (e.g. gl.useProgram(null))
- The active texture is set to gl.TEXTURE0 (e.g. gl.activeTexture(gl.TEXTURE0))
- These features are disabled: gl.SCISSOR_TEST, gl.DEPTH_TEST, gl.BLEND, gl.CULL_FACE
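The state changes above can be undone with a small helper after each camera draw. This is a sketch, not part of the library: the helper name, the GLStateLike interface, and the choice of which capabilities to re-enable are ours; adjust it to whatever state your own renderer expects.

```typescript
// Minimal structural type for the GL calls we touch, so the sketch is
// self-contained; in a real app pass your WebGLRenderingContext.
interface GLStateLike {
  TEXTURE_2D: number; ARRAY_BUFFER: number; TEXTURE0: number;
  DEPTH_TEST: number; CULL_FACE: number;
  bindTexture(target: number, texture: null): void;
  bindBuffer(target: number, buffer: null): void;
  useProgram(program: null): void;
  activeTexture(unit: number): void;
  enable(capability: number): void;
}

// Sketch: restore GL state after the camera draw. The draw leaves the
// bindings listed above as null / TEXTURE0, so only the capabilities the
// app relies on need re-enabling here.
function resetGLStateAfterCameraDraw(gl: GLStateLike): void {
  gl.bindTexture(gl.TEXTURE_2D, null);  // texture 2D binding is left null
  gl.bindBuffer(gl.ARRAY_BUFFER, null); // array buffer binding is left null
  gl.useProgram(null);                  // no program remains bound
  gl.activeTexture(gl.TEXTURE0);        // active texture unit is TEXTURE0
  // The draw disables SCISSOR_TEST, DEPTH_TEST, BLEND and CULL_FACE;
  // re-enable the ones this (hypothetical) renderer uses:
  gl.enable(gl.DEPTH_TEST);
  gl.enable(gl.CULL_FACE);
}
```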
The width of the canvas.
The height of the canvas.
Pass true to mirror the camera image in the X-axis.
Returns the most recent camera frame texture.
Returns a matrix that you can use to transform the UV coordinates of the following full-screen quad in order to render the camera texture:
- Vertex 0: -1, -1, 0; UV 0: 0, 0
- Vertex 1: -1, 1, 0; UV 1: 0, 1
- Vertex 2: 1, -1, 0; UV 2: 1, 0
- Vertex 3: 1, 1, 0; UV 3: 1, 1
The width of the canvas.
The height of the canvas.
Pass true to mirror the camera image in the X-axis.
A 4x4 column-major transformation matrix.
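Applying the returned matrix to a UV coordinate is an ordinary column-major 4x4 multiply of the homogeneous point (u, v, 0, 1). A minimal sketch (the function name is ours, and an identity matrix stands in for the one the library would return; typically you would do this in a vertex shader instead):

```typescript
// Sketch: apply a 4x4 column-major matrix (as returned by
// cameraFrameTextureMatrix) to a UV coordinate. Column-major means
// element (row, col) lives at index col * 4 + row.
function transformUV(m: Float32Array, u: number, v: number): [number, number] {
  // Treat the UV as the homogeneous point (u, v, 0, 1).
  const x = m[0] * u + m[4] * v + m[12];
  const y = m[1] * u + m[5] * v + m[13];
  const w = m[3] * u + m[7] * v + m[15];
  return [x / w, y / w];
}

// Identity as a stand-in; in a real app use the matrix from
// cameraFrameTextureMatrix(width, height, mirror).
const identity = new Float32Array([
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  0, 0, 0, 1,
]);
```

For example, a matrix that flips U (as a mirrored camera image would need) maps u to 1 - u while leaving v unchanged.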
Uploads the current camera frame to a WebGL texture.
Returns true if the current camera frame came from a user-facing camera.
Returns the camera model (i.e. the intrinsic camera parameters) for the current frame.
Returns a transformation where the camera sits, stationary, at the origin of world space, and points down the negative Z axis.
In this mode, tracked anchors move in world space as the user moves the device or tracked objects in the real world.
A 4x4 column-major transformation matrix
Returns a transformation where the camera sits at the origin of world space, but rotates as the user rotates the physical device.
When the Zappar library initializes, the negative Z axis of world space points forward in front of the user.
In this mode, tracked anchors move in world space as the user moves the device or tracked objects in the real world.
Pass true to mirror the location in the X-axis.
A 4x4 column-major transformation matrix
Returns a transformation with the (camera-relative) origin specified by the supplied parameter.
This is used with the poseCameraRelative(...) : Float32Array functions provided by the various anchor types to allow a given anchor (e.g. a tracked image or face) to be the origin of world space.
In this case the camera moves and rotates in world space around the anchor at the origin.
The origin matrix.
A 4x4 column-major transformation matrix
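Conceptually, placing an anchor at the origin of world space puts the camera at the rigid inverse of the anchor's camera-relative pose: if the anchor sits at pose A relative to the camera, the camera sits at the inverse of A relative to the anchor. A sketch of that inverse for a column-major rigid (rotation + translation) matrix, as an illustration of the idea rather than the library's implementation:

```typescript
// Sketch: invert a rigid 4x4 column-major transform (rotation + translation).
// For M = [R | t], the inverse is [R^T | -R^T t]. Column-major indexing:
// element (row, col) lives at index col * 4 + row.
function invertRigid(m: Float32Array): Float32Array {
  const out = new Float32Array(16); // zero-initialized
  // Transpose the 3x3 rotation block.
  for (let r = 0; r < 3; r++) {
    for (let c = 0; c < 3; c++) {
      out[c * 4 + r] = m[r * 4 + c];
    }
  }
  // Translation becomes -R^T t.
  const tx = m[12], ty = m[13], tz = m[14];
  out[12] = -(out[0] * tx + out[4] * ty + out[8] * tz);
  out[13] = -(out[1] * tx + out[5] * ty + out[9] * tz);
  out[14] = -(out[2] * tx + out[6] * ty + out[10] * tz);
  out[15] = 1;
  return out;
}
```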
Destroys the pipeline.
Returns the number of the current frame.
Updates the pipeline and trackers to expose tracking data from the most recently processed camera frame.
Informs the pipeline that the GL context is lost and should not be used.
Sets the WebGL context used for the processing and upload of camera textures.
The WebGL context.
Prepares camera frames for processing.
Call this function on your pipeline once per animation frame (e.g. during your requestAnimationFrame function) in order to process incoming camera frames.
Please note this function modifies some GL state during its operation, so you may need to reset the following GL state if you use it:
- The currently bound framebuffer is set to null (e.g. gl.bindFramebuffer(gl.FRAMEBUFFER, null))
- The currently bound texture 2D is set to null (e.g. gl.bindTexture(gl.TEXTURE_2D, null))
- The currently bound array buffer is set to null (e.g. gl.bindBuffer(gl.ARRAY_BUFFER, null))
- The currently bound element array buffer is set to null (e.g. gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, null))
- The currently used program is set to null (e.g. gl.useProgram(null))
- The active texture is set to gl.TEXTURE0 (e.g. gl.activeTexture(gl.TEXTURE0))
- These features are disabled: gl.SCISSOR_TEST, gl.DEPTH_TEST, gl.BLEND, gl.CULL_FACE
- The pixel store flip-Y mode is disabled (e.g. gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, false))
- The viewport is changed (e.g. gl.viewport(...))
- The clear color is changed (e.g. gl.clearColor(...))

Generated using TypeDoc
Pipelines manage the flow of data coming in (i.e. the camera frames) through to the output from the different tracking types and computer vision algorithms.
https://docs.zap.works/universal-ar/javascript/pipelines-and-camera-processing/
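The per-frame flow through a pipeline described above can be sketched as follows. The PipelineLike interface is ours, standing in for the real Pipeline class so the call order can be shown without a camera or GL context; requestAnimationFrame is browser-only.

```typescript
// Sketch of a per-frame pipeline update, based on the methods documented
// above: processGL, frameUpdate, and cameraFrameUploadGL.
interface PipelineLike {
  processGL(): void;            // prepare incoming camera frames
  frameUpdate(): void;          // expose tracking data for the latest frame
  cameraFrameUploadGL(): void;  // upload the camera frame to a GL texture
}

function onAnimationFrame(pipeline: PipelineLike): void {
  pipeline.processGL();           // 1. process incoming camera frames
  pipeline.frameUpdate();         // 2. update trackers from the newest frame
  pipeline.cameraFrameUploadGL(); // 3. upload the frame for drawing
  // 4. draw the camera (e.g. cameraFrameDrawGL(width, height)) and then
  //    render your 3D content here.
}
```

In a browser you would call onAnimationFrame from a requestAnimationFrame callback, scheduling the next frame at the end of each callback.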