pypocquant.lib.analysis

Module Contents

Functions

get_min_dist(xy1, xy2)

Determine the minimal Euclidean distance between two sets of coordinates.

identify_bars_alt(peak_positions: list, profile_length: int, sensor_band_names: Tuple[str, …], expected_relative_peak_positions: Tuple[float, …], tolerance: float = 0.1)

Assign the peaks to the corresponding bar based on the known relative position in the sensor.

invert_image(image, bit_depth=8)

Inverts an image.

local_minima(array, min_distance=1)

Find all local minima of the array, separated by at least min_distance.

_find_lower_background(profile: np.ndarray, peak_index: int, lowest_bound: int, max_skip: int = 1)

This method is used by find_peak_bounds() and is not meant to be used as a standalone method.

_find_upper_background(profile: np.ndarray, peak_index: int, highest_bound: int, max_skip: int = 1)

This method is used by find_peak_bounds() and is not meant to be used as a standalone method.

find_peak_bounds(profile, border, peak_index, image_log, verbose=False)

Find the lower and upper bounds of current band.

fit_and_subtract_background(profile, border, subtract_offset=10)

Use a robust linear estimator to estimate the background of the profile and subtract it.

estimate_threshold_for_significant_peaks(profile: np.ndarray, border_x: int, thresh_factor: float)

Estimate threshold for significant peaks in sensor signal.

analyze_measurement_window(window: np.ndarray, border_x: int = 10, border_y: int = 5, thresh_factor: float = 3.0, peak_width: int = 7, sensor_band_names: Tuple[str, …] = ('igm', 'igg', 'ctl'), peak_expected_relative_location: Tuple[float, …] = (0.27, 0.55, 0.79), control_band_index: int = -1, subtract_background: bool = False, qc: bool = False, verbose: bool = False, out_qc_folder: Union[str, Path] = '', basename: str = '', image_log: list = [])

Quantify the band signal across the sensor.

extract_inverted_sensor(gray, sensor_center=(119, 471), sensor_size=(40, 190))

Returns the sensor area at the requested position without searching.

get_sensor_contour_fh(strip_gray, sensor_center, sensor_size, sensor_search_area, peak_expected_relative_location, control_band_index=-1, min_control_bar_width=7)

Extract the sensor area from the gray strip image.

extract_rotated_strip_from_box(box_gray, box)

Segments the strip from the box image and rotates it so that it is horizontal.

adapt_bounding_box(bw, x0, y0, width, height, fraction=0.75)

Make the bounding box come closer to the strip by removing bumps along the outline.

point_in_rect(point, rect)

Check if the given point (x, y) is contained in the rect (x0, y0, width, height).

get_rectangles_from_image_and_rectangle_props(img_shape, rectangle_props=(0.52, 0.15, 0.09))

Calculate the left and right rectangles to be used for the orientation analysis using the Hough transform.

use_hough_transform_to_rotate_strip_if_needed(img_gray, rectangle_props=(0.52, 0.15, 0.09), stretch=False, img=None, qc=False)

Estimate the orientation of the strip looking at features in the area around the expected sensor position.

use_ocr_to_rotate_strip_if_needed(img_gray, img=None, text='COVID', on_right=True)

Try reading the given text on the strip. The text is expected to be on one side of the strip; if it is found on the other side, rotate the strip.

read_patient_data_by_ocr(image, known_manufacturers=consts.KnownManufacturers)

Try to extract the patient data by OCR.

pypocquant.lib.analysis.get_min_dist(xy1, xy2)

Determine the minimal Euclidean distance between two sets of coordinates.

Parameters
  • xy1 – First set of coordinates

  • xy2 – Second set of coordinates

Returns

Minimal distance

Return type

tuple
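
A minimal usage sketch based on the documented signature. The coordinate layout (N x 2 NumPy arrays of (x, y) points) and the values are assumptions for illustration only.

    import numpy as np
    from pypocquant.lib.analysis import get_min_dist

    # Two hypothetical sets of (x, y) coordinates (layout assumed)
    xy1 = np.array([[10, 20], [30, 40], [50, 60]])
    xy2 = np.array([[12, 21], [100, 100]])

    # Documented to return the minimal distance (return type: tuple)
    min_dist = get_min_dist(xy1, xy2)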

pypocquant.lib.analysis.identify_bars_alt(peak_positions: list, profile_length: int, sensor_band_names: Tuple[str, …], expected_relative_peak_positions: Tuple[float, …], tolerance: float = 0.1)

Assign the peaks to the corresponding bar based on the known relative position in the sensor.

Parameters
  • peak_positions – list List of absolute peak positions in pixels.

  • profile_length – Length of the profile in pixels.

  • sensor_band_names – Tuple[str, …] Tuple of sensor band names.

  • expected_relative_peak_positions – Tuple[float, …] Tuple of expected relative (0.0 -> 1.0) peak positions.

  • tolerance – Distance tolerance between peak position and expected position for assignment.

Returns

dictionary of band assignments: {bar_name: index}
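
A short sketch of how the documented parameters fit together; the peak positions and profile length below are made-up values chosen to fall close to the expected relative positions.

    from pypocquant.lib.analysis import identify_bars_alt

    # Hypothetical absolute peak positions (pixels) along a 178-pixel profile:
    # 48/178 ~ 0.27, 98/178 ~ 0.55, 141/178 ~ 0.79
    bars = identify_bars_alt(
        peak_positions=[48, 98, 141],
        profile_length=178,
        sensor_band_names=("igm", "igg", "ctl"),
        expected_relative_peak_positions=(0.27, 0.55, 0.79),
        tolerance=0.1,
    )
    # Expected form of the result: {"igm": 0, "igg": 1, "ctl": 2}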

pypocquant.lib.analysis.invert_image(image, bit_depth=8)

Inverts an image.

Parameters
  • image – Image to be inverted

  • bit_depth – Bit depth of image

Returns

image_inv: Inverted image.

Return type

uint8
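
A minimal sketch; the synthetic image only makes the call runnable. For an 8-bit input the inversion is expected to be equivalent to 255 - image, but that behavior is inferred from the name and return type, not stated above.

    import numpy as np
    from pypocquant.lib.analysis import invert_image

    # Synthetic 8-bit grayscale image
    image = np.random.randint(0, 256, size=(40, 190), dtype=np.uint8)

    image_inv = invert_image(image, bit_depth=8)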

pypocquant.lib.analysis.local_minima(array, min_distance=1)

Find all local minima of the array, separated by at least min_distance.

Parameters
  • array – Signal array

  • min_distance – Minimal distance for local minima separation

Returns

array: Array with the local minima

Return type

np.array
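
A minimal sketch with a synthetic 1D signal; the exact encoding of the returned array (indices vs. mask) is not specified above, so it is left uninterpreted here.

    import numpy as np
    from pypocquant.lib.analysis import local_minima

    # Synthetic signal with valleys at indices 2 and 6
    signal = np.array([5, 3, 1, 3, 5, 4, 2, 4, 6], dtype=float)

    minima = local_minima(signal, min_distance=2)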

pypocquant.lib.analysis._find_lower_background(profile: np.ndarray, peak_index: int, lowest_bound: int, max_skip: int = 1)

This method is used by find_peak_bounds() and is not meant to be used as a standalone method.

Parameters
  • profile (np.ndarray) – Signal profile

  • peak_index (int) – Index of the peak

  • lowest_bound (int) – Lowest bound

  • max_skip (int) – Max skip

Returns

current_lower_bound: Lower bound

Returns

current_lower_background: Lower background

Returns

d_lower:

pypocquant.lib.analysis._find_upper_background(profile: np.ndarray, peak_index: int, highest_bound: int, max_skip: int = 1)

This method is used by find_peak_bounds() and is not meant to be used as a standalone method.

Parameters
  • profile (np.ndarray) – Signal profile

  • peak_index (int) – Index of the peak

  • highest_bound (int) – Highest bound

  • max_skip (int) – Max skip

Returns

current_upper_bound: Upper bound

Returns

current_upper_background: Upper background

Returns

d_upper:

pypocquant.lib.analysis.find_peak_bounds(profile, border, peak_index, image_log, verbose=False)

Find the lower and upper bounds of current band.

Parameters
  • profile (np.ndarray) – Signal profile

  • border (int) – Border offset

  • peak_index (int) – Index of the peak

  • image_log (list) – Image log

Returns

current_lower_bound: Lower bound

Returns

current_upper_bound: Upper bound

Returns

image_log: Log for this image
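
A sketch under the assumption that profile is a 1D band profile and peak_index points at a detected band; the Gaussian profile and the index are made up for illustration.

    import numpy as np
    from pypocquant.lib.analysis import find_peak_bounds

    # Hypothetical profile with a single band centered at index 50
    x = np.arange(100)
    profile = np.exp(-((x - 50) ** 2) / (2 * 4.0 ** 2))

    lower_bound, upper_bound, image_log = find_peak_bounds(
        profile, border=10, peak_index=50, image_log=[], verbose=False
    )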

pypocquant.lib.analysis.fit_and_subtract_background(profile, border, subtract_offset=10)

Use a robust linear estimator to estimate the background of the profile and subtract it.

Parameters
  • profile (np.ndarray) – Signal profile

  • border (int) – Border offset

  • subtract_offset (int) – Fixed offset to be used for subtraction.

Returns

profile: Background corrected profile.

Returns

background: Estimated background.

Returns

background_offset: Background offset.
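
A minimal sketch with a synthetic profile (a band on top of a linear background); the values are assumptions for illustration.

    import numpy as np
    from pypocquant.lib.analysis import fit_and_subtract_background

    # Hypothetical profile: linear background plus one Gaussian band
    x = np.arange(200, dtype=float)
    profile = 0.05 * x + 20.0 + 50.0 * np.exp(-((x - 100) ** 2) / (2 * 5.0 ** 2))

    corrected, background, background_offset = fit_and_subtract_background(
        profile, border=10, subtract_offset=10
    )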

pypocquant.lib.analysis.estimate_threshold_for_significant_peaks(profile: np.ndarray, border_x: int, thresh_factor: float)

Estimate threshold for significant peaks in sensor signal.

Parameters
  • profile (np.ndarray) – Signal profile

  • border_x (int) – Border offset in x

  • thresh_factor (float) – Threshold factor for estimation.

Returns

peak_threshold:

Returns

loc_min_indices:

Returns

md:

Returns

lowest_background_threshold:
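
A minimal sketch showing the call and the four documented return values; the synthetic profile is an assumption for illustration.

    import numpy as np
    from pypocquant.lib.analysis import estimate_threshold_for_significant_peaks

    # Hypothetical sensor profile with a single band
    x = np.arange(200, dtype=float)
    profile = 50.0 * np.exp(-((x - 100) ** 2) / (2 * 5.0 ** 2))

    peak_threshold, loc_min_indices, md, lowest_background_threshold = \
        estimate_threshold_for_significant_peaks(
            profile, border_x=10, thresh_factor=3.0
        )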

pypocquant.lib.analysis.analyze_measurement_window(window: np.ndarray, border_x: int = 10, border_y: int = 5, thresh_factor: float = 3.0, peak_width: int = 7, sensor_band_names: Tuple[str, …] = ('igm', 'igg', 'ctl'), peak_expected_relative_location: Tuple[float, …] = (0.27, 0.55, 0.79), control_band_index: int = -1, subtract_background: bool = False, qc: bool = False, verbose: bool = False, out_qc_folder: Union[str, Path] = '', basename: str = '', image_log: list = [])

Quantify the band signal across the sensor.

Notice: the expected relative peak positions for the original strips were: [0.30, 0.52, 0.74]

Parameters
  • window (np.ndarray) – Window (image) to be analyzed.

  • border_x (int) – Border offset in x from window.

  • border_y (int) – Border offset in y from window.

  • thresh_factor (float) – Threshold factor from background.

  • peak_width (int) – Minimal width of a peak.

  • sensor_band_names (Tuple[str, …]) – Names of the sensor bands (test lines TL).

  • peak_expected_relative_location (Tuple[float, …]) – Tuple of relative expected peak positions with respect to the window.

  • control_band_index (int) – Index of the control band in the list sensor_band_names.

  • subtract_background (bool) – Bool to subtract the background.

  • qc (bool) – Bool to return the QC image.

  • verbose (bool) – Bool to return verbose logging information.

  • out_qc_folder (Path) – QC image output folder

  • basename (str) – Basename

  • image_log (list) – Image log list.

Returns

merged_results: Merged results

Returns

image_log: Image log
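
A usage sketch with the documented defaults spelled out. In the real pipeline the window would come from get_sensor_contour_fh() or extract_inverted_sensor(); the synthetic array here only makes the call self-contained.

    import numpy as np
    from pypocquant.lib.analysis import analyze_measurement_window

    # Placeholder for an extracted sensor window (grayscale)
    window = np.random.randint(0, 256, size=(40, 190), dtype=np.uint8)

    merged_results, image_log = analyze_measurement_window(
        window,
        border_x=10,
        border_y=5,
        thresh_factor=3.0,
        peak_width=7,
        sensor_band_names=("igm", "igg", "ctl"),
        peak_expected_relative_location=(0.27, 0.55, 0.79),
        control_band_index=-1,
        subtract_background=False,
        qc=False,
        verbose=False,
        image_log=[],
    )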

pypocquant.lib.analysis.extract_inverted_sensor(gray, sensor_center=(119, 471), sensor_size=(40, 190))

Returns the sensor area at the requested position without searching.

Parameters
  • gray – Gray image.

  • sensor_center – Sensor center coordinate on gray image.

  • sensor_size – Sensor size on gray image.

Returns

inverted_image: The extracted sensor area on the inverted image.
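
A minimal sketch; the file name is a placeholder and the default center and size from the signature are reused.

    import cv2
    from pypocquant.lib.analysis import extract_inverted_sensor

    # Placeholder path to a grayscale strip image
    gray = cv2.imread("strip.png", cv2.IMREAD_GRAYSCALE)

    inverted_sensor = extract_inverted_sensor(
        gray, sensor_center=(119, 471), sensor_size=(40, 190)
    )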

pypocquant.lib.analysis.get_sensor_contour_fh(strip_gray, sensor_center, sensor_size, sensor_search_area, peak_expected_relative_location, control_band_index=-1, min_control_bar_width=7)

Extract the sensor area from the gray strip image.

Parameters
  • strip_gray – np.ndarray Gray-value image of the extracted strip.

  • sensor_center – Tuple[int, int] Coordinates of the center of the sensor (x, y).

  • sensor_size – Tuple[int, int] Size of the sensor (width, height).

  • sensor_search_area – Tuple[int, int] Size of the sensor search area (width, height).

  • peak_expected_relative_location – list[float, …] List of expected relative peak (band) positions in the sensor (0.0 -> 1.0).

  • control_band_index – int Index of the control band in the peak_expected_relative_location. (Optional, default -1 := right-most)

  • min_control_bar_width – int Minimum width of the control bar (in pixels). (Optional, default 7)

Returns

Realigned sensor: np.ndarray

Returns

Sensor coordinates: [y0, y, x0, x]

Returns

sensor_score: score for the sensor extracted (obsolete: fixed at 1.0)

Return type

tuple
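
A sketch of the call only. In the real pipeline strip_gray is the grayscale strip returned by extract_rotated_strip_from_box(); the synthetic array, center, size, and search area below are assumptions for illustration.

    import numpy as np
    from pypocquant.lib.analysis import get_sensor_contour_fh

    # Placeholder for the grayscale strip image
    strip_gray = np.random.randint(0, 256, size=(240, 940), dtype=np.uint8)

    sensor, sensor_coords, sensor_score = get_sensor_contour_fh(
        strip_gray,
        sensor_center=(119, 471),
        sensor_size=(40, 190),
        sensor_search_area=(60, 210),
        peak_expected_relative_location=[0.27, 0.55, 0.79],
        control_band_index=-1,
        min_control_bar_width=7,
    )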

pypocquant.lib.analysis.extract_rotated_strip_from_box(box_gray, box)

Segments the strip from the box image and rotates it so that it is horizontal.

Parameters
  • box_gray – Gray image of QR code box containing strip

  • box – RGB image of QR code box containing strip

Returns

strip_gray: Extracted gray strip from the box

Returns

strip: Extracted RGB strip from the box
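
A minimal sketch; the file name is a placeholder for a box image (the region containing the strip).

    import cv2
    from pypocquant.lib.analysis import extract_rotated_strip_from_box

    box = cv2.imread("box.png")                       # placeholder path (color box image)
    box_gray = cv2.cvtColor(box, cv2.COLOR_BGR2GRAY)  # grayscale version of the same box

    strip_gray, strip = extract_rotated_strip_from_box(box_gray, box)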

pypocquant.lib.analysis.adapt_bounding_box(bw, x0, y0, width, height, fraction=0.75)

Make the bounding box come closer to the strip by removing bumps along the outline.

Parameters
  • bw – Binary mask of an image.

  • x0 – Top left corner in x.

  • y0 – Top left corner in y.

  • width – Width of the bounding box.

  • height – Height of the bounding box.

  • fraction

Returns

new_y0:

Returns

new_y:

Returns

new_x0:

Returns

new_x:
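
A minimal sketch with a synthetic binary mask and an initial bounding box; all coordinate values are made up. The four return values are the adapted bounding-box coordinates listed above.

    import numpy as np
    from pypocquant.lib.analysis import adapt_bounding_box

    # Synthetic binary mask with a bright rectangular region
    bw = np.zeros((240, 940), dtype=np.uint8)
    bw[60:180, 100:840] = 255

    new_y0, new_y, new_x0, new_x = adapt_bounding_box(
        bw, x0=100, y0=60, width=740, height=120, fraction=0.75
    )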

pypocquant.lib.analysis.point_in_rect(point, rect)

Check if the given point (x, y) is contained in the rect (x0, y0, width, height).

Parameters
  • point – Point (x, y) to be checked.

  • rect – Rectangle defined by (x0, y0, width, height).

Returns

True if the point is contained in the rectangle, False otherwise.

Return type

bool
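
A minimal sketch; the rectangle and points are arbitrary illustrative values.

    from pypocquant.lib.analysis import point_in_rect

    rect = (10, 10, 100, 50)  # (x0, y0, width, height)

    inside = point_in_rect((50, 30), rect)    # expected: True
    outside = point_in_rect((200, 30), rect)  # expected: False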

pypocquant.lib.analysis.get_rectangles_from_image_and_rectangle_props(img_shape, rectangle_props=(0.52, 0.15, 0.09))

Calculate the left and right rectangles to be used for the orientation analysis using the Hough transform.

Parameters
  • img_shape – tuple Image shape (width, height).

  • rectangle_props – tuple Tuple containing information about the relative position of the two rectangles to be searched for the inlet on both sides of the center of the image:

    rectangle_props[0]: relative (0..1) vertical height of the rectangle with respect to the image height.

    rectangle_props[1]: relative distance of the left edge of the right rectangle with respect to the center of the image.

    rectangle_props[2]: relative distance of the left edge of the left rectangle with respect to the center of the image.

Returns

left_rect: Left rectangle

Returns

right_rect: Right rectangle

Return type

tuple
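
A minimal sketch; the image shape is a made-up value, passed as (width, height) per the parameter description above.

    from pypocquant.lib.analysis import get_rectangles_from_image_and_rectangle_props

    left_rect, right_rect = get_rectangles_from_image_and_rectangle_props(
        img_shape=(940, 240),
        rectangle_props=(0.52, 0.15, 0.09),
    )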

pypocquant.lib.analysis.use_hough_transform_to_rotate_strip_if_needed(img_gray, rectangle_props=(0.52, 0.15, 0.09), stretch=False, img=None, qc=False)

Estimate the orientation of the strip looking at features in the area around the expected sensor position. If the orientation is estimated to be wrong, rotate the strip.

Parameters
  • img_gray – np.ndarray Gray-scale image to be analyzed.

  • rectangle_props – tuple Tuple containing information about the relative position of the two rectangles to be searched for the inlet on both sides of the center of the image:

    rectangle_props[0]: relative (0..1) vertical height of the rectangle with respect to the image height.

    rectangle_props[1]: relative distance of the left edge of the right rectangle with respect to the center of the image.

    rectangle_props[2]: relative distance of the left edge of the left rectangle with respect to the center of the image.

  • stretch – bool Set to True to apply auto-stretch to the image for Hough detection (1, 99 percentile). The original image will be rotated, if needed.

  • img – np.ndarray or None (default) Apply correction also to this image, if passed.

  • qc – bool If True, create quality control images.

Returns

img_gray: Gray image.

Returns

img: RGB Image.

Returns

qc_image: QC image.

Returns

rotated: Bool; True if the strip was rotated.

Returns

left_rect: Left rectangle

Returns

right_rect: Right rectangle

Return type

tuple
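
A usage sketch; the file name is a placeholder, and the six return values follow the list documented above.

    import cv2
    from pypocquant.lib.analysis import use_hough_transform_to_rotate_strip_if_needed

    strip = cv2.imread("strip.png")                        # placeholder path
    strip_gray = cv2.cvtColor(strip, cv2.COLOR_BGR2GRAY)

    strip_gray, strip, qc_image, rotated, left_rect, right_rect = \
        use_hough_transform_to_rotate_strip_if_needed(
            strip_gray,
            rectangle_props=(0.52, 0.15, 0.09),
            stretch=False,
            img=strip,
            qc=True,
        )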

pypocquant.lib.analysis.use_ocr_to_rotate_strip_if_needed(img_gray, img=None, text='COVID', on_right=True)

Try reading the given text on the strip. The text is expected to be on one side of the strip; if it is found on the other side, rotate the strip.

We apply the same rotation also to the second image, if passed.

Parameters
  • img_gray – Gray input image to be potentially rotated.

  • img – RGB input image to be potentially rotated.

  • text – Text to be identified by OCR.

  • on_right – Position of the text to be identified with respect to the strip orientation.

Returns

img_gray: Gray image.

Returns

img: RGB Image.

Returns

rotated: Bool; True if the strip was rotated.
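
A usage sketch; the file name is a placeholder, and a working OCR backend (e.g. Tesseract) is assumed to be installed.

    import cv2
    from pypocquant.lib.analysis import use_ocr_to_rotate_strip_if_needed

    strip = cv2.imread("strip.png")                        # placeholder path
    strip_gray = cv2.cvtColor(strip, cv2.COLOR_BGR2GRAY)

    strip_gray, strip, rotated = use_ocr_to_rotate_strip_if_needed(
        strip_gray, img=strip, text="COVID", on_right=True
    )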

pypocquant.lib.analysis.read_patient_data_by_ocr(image, known_manufacturers=consts.KnownManufacturers)

Try to extract the patient data by OCR.

Parameters
  • image – Input image to be read with OCR.

  • known_manufacturers – List with known manufacturers.

Returns

fid: FID number.

Returns

manufacturer: Manufacturer name.
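
A usage sketch; the file name is a placeholder and the default list of known manufacturers (consts.KnownManufacturers) is used implicitly.

    import cv2
    from pypocquant.lib.analysis import read_patient_data_by_ocr

    image = cv2.imread("photo_of_test.jpg")  # placeholder path

    fid, manufacturer = read_patient_data_by_ocr(image)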