Image preparation

Functions used to normalize, format, cast, project or filter images.

Normalize images

Rescale pixel intensity or stretch image contrast:

bigfish.stack.rescale(tensor, channel_to_stretch=None, stretching_percentile=99.9)

Rescale tensor values up to its dtype range (unsigned/signed integers) or between 0 and 1 (float).

Each round and each channel is rescaled independently. The tensor should have between 2 and 5 dimensions, in the following order: (round, channel, z, y, x).

By default, we rescale the tensor intensity range to its dtype range (or between 0 and 1 for a float tensor). We can improve the contrast by stretching a smaller range of pixel intensity: between the minimum value of a channel and the percentile value of the channel (cf. stretching_percentile).

To be consistent with skimage, 64-bit (unsigned) integer images are not supported.

Parameters:
tensor : np.ndarray

Tensor to rescale.

channel_to_stretch : int, List[int] or Tuple[int]

Channel to stretch. If None, minimum and maximum of each channel are used as the intensity range to rescale.

stretching_percentile : float or int

Percentile to determine the maximum intensity value used to rescale the image. If 1, the maximum pixel intensity is used to rescale the image.

Returns:
tensor : np.ndarray

Tensor rescaled.
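
Example (a minimal usage sketch based on the signature above; the synthetic uint16 image is purely illustrative):

    import numpy as np
    import bigfish.stack as stack

    # 16-bit image whose values occupy only a fraction of the dtype range
    rng = np.random.default_rng(0)
    image = rng.integers(100, 5000, size=(512, 512), dtype=np.uint16)

    # spread the intensity range over the full uint16 range [0, 65535]
    image_rescaled = stack.rescale(image)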

bigfish.stack.compute_image_standardization(image)

Normalize an image by computing its z-score.

Parameters:
image : np.ndarray

Image to normalize with shape (y, x).

Returns:
normalized_image : np.ndarray

Normalized image with shape (y, x).
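
Example (a minimal sketch assuming a 2-d single-channel input; the synthetic image is only illustrative):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    image = rng.integers(0, 5000, size=(512, 512), dtype=np.uint16)

    # center and scale the pixel values (zero mean, unit variance)
    image_normalized = stack.compute_image_standardization(image)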


Format images

Resize and pad images:

bigfish.stack.resize_image(image, output_shape, method='bilinear')

Resize an image with bilinear interpolation or nearest neighbor method.

Parameters:
image : np.ndarray

Image to resize.

output_shape : Tuple[int]

Shape of the resized image.

method : str

Interpolation method to use.

Returns:
image_resized : np.ndarray

Resized image.
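
Example (a minimal sketch; the input array and target shape are arbitrary):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    image = rng.integers(0, 5000, size=(512, 512), dtype=np.uint16)

    # downsample with the default bilinear interpolation
    image_resized = stack.resize_image(image, output_shape=(256, 256))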

bigfish.stack.get_marge_padding(height, width, x)

Pad image to make its shape a multiple of x.

Parameters:
height : int

Original height of the image.

width : int

Original width of the image.

x : int

The padded image has a height and width that are multiples of x.

Returns:
marge_padding : List[List]

List of lists with the format [[marge_height_t, marge_height_b], [marge_width_l, marge_width_r]].
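
Example (a minimal sketch; the padding mode passed to np.pad is an arbitrary choice for illustration):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    image = rng.integers(0, 5000, size=(500, 490), dtype=np.uint16)

    # compute the margins needed to make both dimensions multiples of 16
    marge_padding = stack.get_marge_padding(image.shape[0], image.shape[1], x=16)

    # apply the padding ("symmetric" is an illustrative choice of mode)
    image_padded = np.pad(image, pad_width=marge_padding, mode="symmetric")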


Cast images

Cast images to a specified dtype (with respect to the image range of values):

bigfish.stack.cast_img_uint8(tensor)

Cast the image to np.uint8 and scale values between 0 and 255.

Negative values are not allowed as the skimage method img_as_ubyte would clip them to 0. Positive values are scaled between 0 and 255, except if they already fit in 8 bits (in that case values are not modified).

Parameters:
tensor : np.ndarray

Image to cast.

Returns:
tensor : np.ndarray, np.uint8

Image cast.
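
Example (a minimal sketch with a float image already scaled between 0 and 1):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    image_float = rng.random((256, 256)).astype(np.float32)  # values in [0, 1)

    # cast to 8-bit; values end up between 0 and 255
    image_uint8 = stack.cast_img_uint8(image_float)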

bigfish.stack.cast_img_uint16(tensor)

Cast the image to np.uint16.

Negative values are not allowed as the skimage method img_as_uint would clip them to 0. Positive values are scaled between 0 and 65535, except if they already fit in 16 bits (in that case values are not modified).

Parameters:
tensor : np.ndarray

Image to cast.

Returns:
tensor : np.ndarray, np.uint16

Image cast.

bigfish.stack.cast_img_float32(tensor)

Cast the image to np.float32.

If the input data has an (unsigned) integer dtype, values are scaled between 0 and 1. When converting from a np.float dtype, values are not modified.

Parameters:
tensor : np.ndarray

Image to cast.

Returns:
tensor : np.ndarray, np.float32

Image cast.
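
Example (a minimal sketch with a synthetic 16-bit image):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    image_uint16 = rng.integers(0, 65535, size=(256, 256), dtype=np.uint16)

    # cast to float32; values are scaled between 0 and 1
    image_float = stack.cast_img_float32(image_uint16)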

bigfish.stack.cast_img_float64(tensor)

Cast the tensor to np.float64.

If the input data has an (unsigned) integer dtype, values are scaled between 0 and 1. When converting from a np.float dtype, values are not modified.

Parameters:
tensor : np.ndarray

Tensor to cast.

Returns:
tensor : np.ndarray, np.float64

Tensor cast.


Filter images

Apply filtering transformations:

Use a Laplacian of Gaussian (LoG) filter to enhance peak signals and denoise the rest of the image:

Use blurring filters with a large kernel to estimate and remove the background signal:

bigfish.stack.mean_filter(image, kernel_shape, kernel_size)

Apply a mean filter to a 2-d image through convolution.

Parameters:
image : np.ndarray, np.uint or np.float

Image with shape (y, x).

kernel_shape : str

Shape of the kernel used to compute the filter (diamond, disk, rectangle or square).

kernel_size : int, Tuple(int) or List(int)

The size of the kernel. For the rectangle we expect two integers (height, width).

Returns:
image_filtered : np.ndarray, np.uint

Filtered 2-d image with shape (y, x).
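
Example (a minimal sketch; the kernel shape and size are arbitrary choices):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    image = rng.integers(0, 5000, size=(256, 256), dtype=np.uint16)

    # smooth the image with a disk-shaped kernel (size 5)
    image_filtered = stack.mean_filter(image, kernel_shape="disk", kernel_size=5)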

bigfish.stack.median_filter(image, kernel_shape, kernel_size)

Apply a median filter to a 2-d image.

Parameters:
image : np.ndarray, np.uint

Image with shape (y, x).

kernel_shape : str

Shape of the kernel used to compute the filter (diamond, disk, rectangle or square).

kernel_size : int, Tuple(int) or List(int)

The size of the kernel. For the rectangle we expect two integers (height, width).

Returns:
image_filtered : np.ndarray, np.uint

Filtered 2-d image with shape (y, x).

bigfish.stack.gaussian_filter(image, sigma, allow_negative=False)

Apply a Gaussian filter to a 2-d or 3-d image.

Parameters:
image : np.ndarray

Image with shape (z, y, x) or (y, x).

sigma : int, float, Tuple(float, int) or List(float, int)

Standard deviation used for the gaussian kernel (one for each dimension). If it's a scalar, the same standard deviation is applied to every dimension.

allow_negative : bool

Allow negative values after the filtering or clip them to 0. Not compatible with unsigned integer images.

Returns:
image_filtered : np.ndarray

Filtered image.
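
Example (a minimal sketch; the sigma value is arbitrary):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    image = rng.integers(0, 5000, size=(256, 256), dtype=np.uint16)

    # blur the image with an isotropic gaussian kernel
    image_filtered = stack.gaussian_filter(image, sigma=1.5)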

bigfish.stack.maximum_filter(image, kernel_shape, kernel_size)

Apply a maximum filter to a 2-d image.

Parameters:
image : np.ndarray, np.uint

Image with shape (y, x).

kernel_shape : str

Shape of the kernel used to compute the filter (diamond, disk, rectangle or square).

kernel_size : int, Tuple(int) or List(int)

The size of the kernel. For the rectangle we expect two integers (height, width).

Returns:
image_filtered : np.ndarray, np.uint

Filtered 2-d image with shape (y, x).

bigfish.stack.minimum_filter(image, kernel_shape, kernel_size)

Apply a minimum filter to a 2-d image.

Parameters:
image : np.ndarray, np.uint

Image with shape (y, x).

kernel_shape : str

Shape of the kernel used to compute the filter (diamond, disk, rectangle or square).

kernel_size : int, Tuple(int) or List(int)

The size of the kernel. For the rectangle we expect two integers (height, width).

Returns:
image_filtered : np.ndarray, np.uint

Filtered 2-d image with shape (y, x).

bigfish.stack.dilation_filter(image, kernel_shape=None, kernel_size=None)

Apply a dilation to a 2-d image.

Parameters:
image : np.ndarray

Image with shape (y, x).

kernel_shape : str

Shape of the kernel used to compute the filter (diamond, disk, rectangle or square). If None, use a cross-shaped structuring element (connectivity=1).

kernel_size : int, Tuple(int) or List(int)

The size of the kernel. For the rectangle we expect two integers (height, width). If None, use a cross-shaped structuring element (connectivity=1).

Returns:
image_filtered : np.ndarray

Filtered 2-d image with shape (y, x).
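
Example (a minimal sketch; with both keyword arguments left to None a cross-shaped structuring element is used):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    image = rng.integers(0, 5000, size=(256, 256), dtype=np.uint16)

    # dilation with a disk-shaped kernel (size 3)
    image_dilated = stack.dilation_filter(image, kernel_shape="disk", kernel_size=3)

    # dilation with the default cross-shaped structuring element
    image_dilated_default = stack.dilation_filter(image)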

bigfish.stack.erosion_filter(image, kernel_shape=None, kernel_size=None)

Apply an erosion to a 2-d image.

Parameters:
image : np.ndarray

Image with shape (y, x).

kernel_shape : str

Shape of the kernel used to compute the filter (diamond, disk, rectangle or square). If None, use a cross-shaped structuring element (connectivity=1).

kernel_size : int, Tuple(int) or List(int)

The size of the kernel. For the rectangle we expect two integers (height, width). If None, use a cross-shaped structuring element (connectivity=1).

Returns:
image_filtered : np.ndarray

Filtered 2-d image with shape (y, x).

bigfish.stack.log_filter(image, sigma)

Apply a Laplacian of Gaussian filter to a 2-d or 3-d image.

The function returns the inverse of the filtered image so that the pixels with the highest intensity in the original (smoothed) image have positive values. Pixels with a low intensity that would return a negative value are clipped to zero.

Parameters:
image : np.ndarray

Image with shape (z, y, x) or (y, x).

sigma : int, float, Tuple(float, int) or List(float, int)

Standard deviation used for the gaussian kernel (one for each dimension). If it's a scalar, the same standard deviation is applied to every dimension.

Returns:
image_filtered : np.ndarray

Filtered image.
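
Example (a minimal sketch simulating bright spots over a noisy background; the spot positions and sigma value are arbitrary):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    image = rng.integers(100, 300, size=(256, 256), dtype=np.uint16)
    image[50, 50] = 5000    # a few bright "spots"
    image[120, 200] = 5000

    # enhance the spots; low-intensity responses are clipped to zero
    image_filtered = stack.log_filter(image, sigma=2)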

bigfish.stack.remove_background_mean(image, kernel_shape='disk', kernel_size=200)

Remove background noise from a 2-d image by subtracting a mean-filtered version of it.

Parameters:
image : np.ndarray, np.uint

Image to process with shape (y, x).

kernel_shape : str

Shape of the kernel used to compute the filter (diamond, disk, rectangle or square).

kernel_size : int, Tuple(int) or List(int)

The size of the kernel. For the rectangle we expect two integers (height, width).

Returns:
image_without_back : np.ndarray, np.uint

Image processed.
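
Example (a minimal sketch; the kernel size is an arbitrary choice and should be large relative to the structures of interest):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    image = rng.integers(100, 5000, size=(256, 256), dtype=np.uint16)

    # estimate the background with a large mean filter and subtract it
    image_no_background = stack.remove_background_mean(
        image, kernel_shape="disk", kernel_size=50)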

bigfish.stack.remove_background_gaussian(image, sigma)

Remove background noise from a 2-d or 3-d image by subtracting a gaussian-filtered version of it.

Parameters:
image : np.ndarray

Image to process with shape (z, y, x) or (y, x).

sigma : int, float, Tuple(float, int) or List(float, int)

Standard deviation used for the gaussian kernel (one for each dimension). If it's a scalar, the same standard deviation is applied to every dimension.

Returns:
image_no_background : np.ndarray

Image processed with shape (z, y, x) or (y, x).
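
Example (a minimal sketch; the sigma value is an arbitrary choice, usually larger than the structures to preserve):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    image = rng.integers(100, 5000, size=(10, 256, 256), dtype=np.uint16)

    # subtract a heavily blurred version of the 3-d image
    image_no_background = stack.remove_background_gaussian(image, sigma=5)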


Project images in 2D

Build a 2D projection by computing the maximum, mean or median values:

bigfish.stack.maximum_projection(image)

Project the z-dimension of an image, keeping the maximum intensity of each yx pixel.

Parameters:
image : np.ndarray

A 3-d image with shape (z, y, x).

Returns:
projected_image : np.ndarray

A 2-d image with shape (y, x).
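
Example (a minimal sketch with a synthetic 3-d stack):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    volume = rng.integers(0, 5000, size=(10, 256, 256), dtype=np.uint16)

    # keep the brightest value of each yx pixel across the z-dimension
    image_mip = stack.maximum_projection(volume)  # shape (256, 256)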

bigfish.stack.mean_projection(image, return_float=False)

Project the z-dimension of an image, computing the mean intensity of each yx pixel.

Parameters:
image : np.ndarray

A 3-d tensor with shape (z, y, x).

return_float : bool, default=False

Return a (potentially more accurate) float array.

Returns:
projected_image : np.ndarray

A 2-d image with shape (y, x).
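
Example (a minimal sketch with a synthetic 3-d stack):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    volume = rng.integers(0, 5000, size=(10, 256, 256), dtype=np.uint16)

    # average each yx pixel across the z-dimension
    image_mean = stack.mean_projection(volume)

    # keep a (potentially more accurate) float result
    image_mean_float = stack.mean_projection(volume, return_float=True)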

bigfish.stack.median_projection(image)

Project the z-dimension of an image, computing the median intensity of each yx pixel.

Parameters:
image : np.ndarray

A 3-d image with shape (z, y, x).

Returns:
projected_image : np.ndarray

A 2-d image with shape (y, x).


Clean out-of-focus pixels

Compute a pixel-wise focus score:

Remove the out-of-focus z-slices of a 3D image:

Build a 2D projection by removing the out-of-focus z-slices/pixels:

bigfish.stack.compute_focus(image, neighborhood_size=31)

Helmli and Scherer’s mean method is used as a focus metric.

For each pixel yx in a 2-d image, we compute the ratio:

\[
R(y, x) =
\begin{cases}
\dfrac{I(y, x)}{\mu(y, x)} & \text{if } I(y, x) \ge \mu(y, x) \\[6pt]
\dfrac{\mu(y, x)}{I(y, x)} & \text{otherwise}
\end{cases}
\]

with \(I(y, x)\) the intensity of the pixel yx and \(\mu(y, x)\) the mean intensity of the pixels in its neighborhood.

For a 3-d image, we compute this metric for each z surface.

Parameters:
image : np.ndarray

A 2-d or 3-d image with shape (y, x) or (z, y, x).

neighborhood_size : int or tuple or list, default=31

The size of the square used to define the neighborhood of each pixel. An odd value is preferred. To define a rectangular neighborhood, a tuple or a list with two elements (height, width) can be provided.

Returns:
focus : np.ndarray, np.float64

A 2-d or 3-d tensor with the R(y, x) computed for each pixel of the original image.
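
Example (a minimal sketch on a synthetic 3-d stack, using the documented default neighborhood size):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    volume = rng.integers(100, 5000, size=(10, 256, 256), dtype=np.uint16)

    # one focus score R(y, x) per pixel, computed slice by slice
    focus = stack.compute_focus(volume, neighborhood_size=31)
    print(focus.shape, focus.dtype)  # (10, 256, 256) float64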

bigfish.stack.in_focus_selection(image, focus, proportion)

Select and keep the 2-d slices with the highest level of focus.

Helmli and Scherer’s mean method is used as a focus metric.

Parameters:
image : np.ndarray

A 3-d tensor with shape (z, y, x).

focus : np.ndarray, np.float64

A 3-d tensor with a focus metric computed for each pixel of the original image. See bigfish.stack.compute_focus().

proportion : float or int

Proportion of z-slices to keep (float between 0 and 1) or number of z-slices to keep (positive integer).

Returns:
in_focus_image : np.ndarray

A 3-d tensor with shape (z_in_focus, y, x), with the out-of-focus z-slices removed.
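
Example (a minimal sketch chaining compute_focus and in_focus_selection; the proportion is an arbitrary choice):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    volume = rng.integers(100, 5000, size=(10, 256, 256), dtype=np.uint16)

    # keep the 80% of z-slices with the highest focus
    focus = stack.compute_focus(volume)
    volume_in_focus = stack.in_focus_selection(volume, focus, proportion=0.8)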

bigfish.stack.get_in_focus_indices(focus, proportion)

Select the best in-focus z-slices.

Helmli and Scherer’s mean method is used as a focus metric.

Parameters:
focus : np.ndarray, np.float

A 3-d tensor with a focus metric computed for each pixel of the original image. See bigfish.stack.compute_focus().

proportion : float or int

Proportion of z-slices to keep (float between 0 and 1) or number of z-slices to keep (positive integer).

Returns:
indices_to_keep : List[int]

Indices of slices with the best focus score.

bigfish.stack.focus_projection(image, proportion=0.75, neighborhood_size=7, method='median')

Project the z-dimension of an image.

Inspired by Samacoits Aubin's thesis (part 5.3, strategy 5). Compared to the original algorithm, we use the same focus measures to select the in-focus z-slices and to project our image.

  1. Compute a focus score for each pixel yx with a fixed neighborhood size.

  2. Keep a proportion of z-slices with the highest average focus score.

  3. Keep the median/maximum pixel intensity among the top 5 z-slices (at most) with the highest focus score.

Parameters:
image : np.ndarray

A 3-d image with shape (z, y, x).

proportion : float or int, default=0.75

Proportion of z-slices to keep (float between 0 and 1) or number of z-slices to keep (positive integer).

neighborhood_size : int or tuple or list, default=7

The size of the square used to define the neighborhood of each pixel. An odd value is preferred. To define a rectangular neighborhood, a tuple or a list with two elements (height, width) can be provided.

method : {`median`, `max`}, default=`median`

Projection method applied on the selected pixel values.

Returns:
projected_image : np.ndarray

A 2-d image with shape (y, x).
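
Example (a minimal sketch with a synthetic 3-d stack, using the documented default parameter values):

    import numpy as np
    import bigfish.stack as stack

    rng = np.random.default_rng(0)
    volume = rng.integers(100, 5000, size=(10, 256, 256), dtype=np.uint16)

    # select the in-focus z-slices, then project with a median over the best pixels
    image_projected = stack.focus_projection(
        volume, proportion=0.75, neighborhood_size=7, method="median")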