Arithmetic function
dip_Error dip_Sign ( dip_Image in, dip_Image out )
Supported data types: binary, integer, float
Computes the sign of each pixel value in the input image and writes the result to an output image of a signed integer type. Each output value is -1, 0, or +1; the sign of zero is defined as zero.
Data type | Name | Description
----------|------|------------
dip_Image | in   | Input image
dip_Image | out  | Output image
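The following is a minimal sketch of the per-pixel operation that dip_Sign performs, shown on a plain C buffer rather than through the dip_Image interface; the helper name and buffer types are illustrative and not part of the DIPlib API.

```c
#include <stddef.h>

/* Map each input sample to -1, 0, or +1; the sign of zero is zero.
 * Illustrative only: dip_Sign itself operates on dip_Image objects
 * and handles the supported data types internally. */
static void sign_of_buffer( float const *in, int *out, size_t n )
{
   for( size_t i = 0; i < n; ++i ) {
      /* (x > 0) - (x < 0) yields +1, 0, or -1 */
      out[ i ] = ( in[ i ] > 0.0f ) - ( in[ i ] < 0.0f );
   }
}
```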