ColorReconstruction.ProcessFrame Method (FusionFloatImageFrame, Int32, Int32, Matrix4)
Kinect for Windows 1.8
Processes the specified depth frame through the Kinect Fusion pipeline.
Syntax
public bool ProcessFrame (
    FusionFloatImageFrame depthFloatFrame,
    int maxAlignIterationCount,
    int maxIntegrationWeight,
    Matrix4 worldToCameraTransform
)
Parameters
- depthFloatFrame
Type: FusionFloatImageFrame
The depth float frame to be processed. The maximum resolution of this frame is 640×480.
- maxAlignIterationCount
Type: Int32
The maximum number of iterations of the algorithm to run. The minimum value is one. Using only a small number of iterations will have a faster run time, but the algorithm may not converge to the correct transformation.
- maxIntegrationWeight
Type: Int32
A parameter to control the temporal smoothing of depth integration. The minimum value is one. Lower values produce a noisier reconstruction, but are suitable for more dynamic environments because moving objects integrate and disintegrate faster. Higher values integrate objects more slowly, but provide finer detail with less noise.
- worldToCameraTransform
Type: Matrix4
The best guess at the current camera pose. This is usually the camera pose result from the most recent call to the FusionDepthProcessor.AlignPointClouds or ColorReconstruction.AlignDepthFloatToReconstruction method.
Return Value
Type: Boolean
Returns true if successful; returns false if the algorithm encountered a problem aligning the input depth image and could not calculate a valid transformation.
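A minimal per-frame usage sketch is shown below. It assumes an initialized ColorReconstruction instance named reconstruction, a FusionFloatImageFrame named depthFloatFrame that already holds the converted depth data for the current frame, and an integer trackingErrorCount used to track consecutive failures; these names, and the use of the FusionDepthProcessor default constants as starting values, are illustrative rather than prescribed.

// Use the best camera pose currently known to the reconstruction as the starting guess.
Matrix4 worldToCameraTransform = reconstruction.GetCurrentWorldToCameraTransform();

// Align the new depth frame to the reconstruction and integrate it in one call.
bool trackingSucceeded = reconstruction.ProcessFrame(
    depthFloatFrame,
    FusionDepthProcessor.DefaultAlignIterationCount,
    FusionDepthProcessor.DefaultIntegrationWeight,
    worldToCameraTransform);

if (!trackingSucceeded)
{
    // Alignment failed: no depth data was integrated and the camera pose is unchanged.
    // A common strategy is to count consecutive failures and reset the reconstruction
    // if tracking is lost for too long.
    trackingErrorCount++;
}
else
{
    trackingErrorCount = 0;
}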
Remarks
This method is equivalent to calling the AlignDepthFloatToReconstruction and IntegrateFrame methods on the specified depth frame. You can call these low-level methods individually to have more control over the operation, but calling ProcessFrame will complete faster due to the integrated nature of the calls.
Note
If a tracking error occurs during the AlignDepthFloatToReconstruction call, no depth data integration will be performed and the camera pose will remain unchanged.
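For comparison, the sketch below outlines the equivalent low-level sequence: align first, then integrate only if alignment succeeded. The deltaFromReferenceFrame image and the exact overloads used here are assumptions based on the corresponding Reconstruction methods; consult the AlignDepthFloatToReconstruction and IntegrateFrame reference pages for the definitive signatures.

float alignmentEnergy;

// Attempt to align the depth frame to the reconstruction volume.
bool aligned = reconstruction.AlignDepthFloatToReconstruction(
    depthFloatFrame,
    FusionDepthProcessor.DefaultAlignIterationCount,
    deltaFromReferenceFrame,   // per-pixel alignment residuals (FusionFloatImageFrame)
    out alignmentEnergy,
    worldToCameraTransform);

if (aligned)
{
    // Integrate only when alignment succeeded, using the updated camera pose.
    worldToCameraTransform = reconstruction.GetCurrentWorldToCameraTransform();
    reconstruction.IntegrateFrame(
        depthFloatFrame,
        FusionDepthProcessor.DefaultIntegrationWeight,
        worldToCameraTransform);
}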
If you need a visible output image of the reconstruction, call the CalculatePointCloud method and then call the FusionDepthProcessor.ShadePointCloud method.
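A minimal rendering sketch is shown below. It assumes pointCloudFrame (FusionPointCloudImageFrame), shadedSurfaceFrame and shadedNormalsFrame (FusionColorImageFrame), and the int[] shadedPixels buffer are preallocated at the depth image resolution; the CalculatePointCloud overload without a color frame is assumed to be available here.

// Raycast the reconstruction volume into a point cloud from the current camera pose.
Matrix4 currentPose = reconstruction.GetCurrentWorldToCameraTransform();
reconstruction.CalculatePointCloud(pointCloudFrame, currentPose);

// Shade the point cloud into displayable color images.
FusionDepthProcessor.ShadePointCloud(
    pointCloudFrame,
    currentPose,
    shadedSurfaceFrame,
    shadedNormalsFrame);

// Copy the shaded pixels out for display, for example into a WriteableBitmap.
shadedSurfaceFrame.CopyPixelDataTo(shadedPixels);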
Requirements
Namespace: Microsoft.Kinect.Toolkit.Fusion
Assembly: Microsoft.Kinect.Toolkit.Fusion (in microsoft.kinect.toolkit.fusion.dll)
See Also
Reference
ColorReconstruction Class
ColorReconstruction Members
Microsoft.Kinect.Toolkit.Fusion Namespace