added a preprocess sequence plugin

This was requested by Claire Bourlieu to ease the preprocessing. This preprocessing could easily be done manually with Fiji, but it is a bit tedious...
Guillaume Raffy 2020-11-04 19:37:14 +01:00
parent 7b108fde5b
commit e397e3aa0f
4 changed files with 140 additions and 5 deletions


@@ -7,7 +7,7 @@ TEMP_PATH:=$(shell echo ~/work/lipase/tmp)
TESTS_OUTPUT_DATA_PATH:=$(TEMP_PATH)
LIB_SRC_FILES=$(shell find ./src/lipase -name "*.py")
PLUGINS_SRC_FILES=$(shell find ./src/ij-plugins -name "*.py")
-LIPASE_VERSION=1.02
+LIPASE_VERSION=1.03
BUILD_ROOT_PATH:=$(TEMP_PATH)/build
PACKAGE_FILE_PATH=$(BUILD_ROOT_PATH)/lipase-$(LIPASE_VERSION).zip
@@ -115,7 +115,7 @@ test_globules_area: install
# on macosx : /Applications/Fiji.app/Contents/MacOS/ImageJ-macosx --ij2 --headless --run './test0001.py'
# /Applications/Fiji.app/Contents/MacOS/ImageJ-macosx --ij2 --headless --run './tests/test0001.py' "lipase_src_root_path='$(pwd)',raw_images_root_path='/Users/graffy/ownCloud/ipr/lipase/raw-images'"
echo 2 > '/tmp/test_result.txt' ; \
-$(FIJI_EXE_PATH) --ij2 --headless --run './src/ij-plugins/Ipr/Lipase/Compute_Globules_Area.py' "INPUT_STACK='$RAW_IMAGES_ROOT_PATH/res_soleil2018/GGH/GGH_2018_cin2_phiG_I_327_vis_-40_1/Pos0/img_000000000_DM300_nofilter_vis_000.tif', INPUT_BACKGROUND='$RAW_IMAGES_ROOT_PATH/res_soleil2018/GGH/GGH_2018_cin2_phiG_I_327_vis_-40_1/Pos0/img_000000000_DM300_nofilter_vis_000.tif', PARTICLE_THRESHOLD='2000'" ; \
+$(FIJI_EXE_PATH) --ij2 --headless --run './src/ij-plugins/Ipr/Lipase/Compute_Globules_Area.py' "INPUT_STACK='$(RAW_IMAGES_ROOT_PATH)/res_soleil2018/GGH/GGH_2018_cin2_phiG_I_327_vis_-40_1/Pos0/img_000000000_DM300_nofilter_vis_000.tif', INPUT_BACKGROUND='$(RAW_IMAGES_ROOT_PATH)/res_soleil2018/GGH/GGH_2018_cin2_phiG_I_327_vis_-40_1/Pos0/img_000000000_DM300_nofilter_vis_000.tif', PARTICLE_THRESHOLD='2000'" ; \
ERROR_CODE=$$? ; \
echo "Fiji 's return code : $$ERROR_CODE" ; \
ERROR_CODE=$$(cat '/tmp/test_result.txt') ; \


@@ -1,4 +1,5 @@
-\documentclass[a4paper, 10pt]{article}
+\documentclass[a4paper,10pt]{article}
+\usepackage{lmodern} % https://tex.stackexchange.com/questions/58087/how-to-remove-the-warnings-font-shape-ot1-cmss-m-n-in-size-4-not-available
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{subcaption}
@@ -26,7 +27,7 @@
\subsubsection{Ipr/Lipase/Define raw images root}
-This action will let the user choose the directory that contains the input sequences. This directory is called \texttt{raw\_images\_root\_path}, and its value is stored in the user's home directory, in a file named \texttt{\textasciitilde/.fr.univ-rennes1.ipr.lipase.json}. This \texttt{raw\_images\_root\_path} is used by some other Lipase ImageJ plugin menu items, so it is probably the first action the user is expected to perform. Unless the user has multiple image databases, this action only needs to be performed once.
+This action will let the user choose the directory that contains the input sequences. This directory is called \texttt{raw\_images\-\_root\_path}, and its value is stored in the user's home directory, in a file named \texttt{\textasciitilde/.fr.univ-rennes1\-.ipr.lipase.json}. This \texttt{raw\_images\-\_root\_path} is used by some other Lipase ImageJ plugin menu items, so it is probably the first action the user is expected to perform. Unless the user has multiple image databases, this action only needs to be performed once.
The chosen directory is expected to contain sequences of images, with the proper accompanying metadata files (\texttt{display\_and\_comments.txt} and \texttt{metadata.txt}) saved by micromanager\footnote{\url{https://micro-manager.org/}}. An example of a sequence database is shown in figure \ref{fig:input_images_layout}.
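For illustration, the stored value can be read back with a few lines of Python (a minimal sketch: the lipase package actually provides its own \texttt{UserSettings} class for this, and the JSON key name used below is an assumption, not confirmed by this commit):

import json
import os.path

def get_raw_images_root_path():
    # hypothetical reader for the per-user settings file described above;
    # the key name inside the JSON file is an assumption
    settings_path = os.path.expanduser('~/.fr.univ-rennes1.ipr.lipase.json')
    with open(settings_path) as settings_file:
        return json.load(settings_file)['raw_images_root_path']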
@@ -72,6 +73,35 @@
This action allows the user to display a sequence interactively chosen from the catalog of sequences.
\subsubsection{Ipr/Lipase/Preprocess Sequence}
This action allows the user to preprocess a sequence by getting rid of lighting and optical artifacts, using the following equation:
\begin{equation}
P(x,y,t) = \frac{R(x,y,t) - D(x,y)}{W(x,y) - D(x,y)}
\end{equation}
where $D$ is the dark image, $W$ the white image, $R$ the raw image and $P$ the preprocessed image, as explained below.
\begin{itemize}
\item Input data:
\begin{description}
\item [input image stack] the sequence of raw images to be preprocessed. For each pixel location, no value is expected to fall outside the range defined by the dark and white images at that location.
\item [white image] the image that would be obtained if we were looking at a 100\% reflective material under the lighting and optical conditions that were used to capture the raw sequence.
\item [dark image] the image that would be obtained if we were looking at a 0\% reflective material under the lighting and optical conditions that were used to capture the raw sequence.
\end{description}
\item Output data:
\begin{description}
\item [uniform sequence] the preprocessed sequence, after removal of the lighting and optics non-uniformity. In this preprocessed sequence, each pixel value is expected to lie in the range $[0.0;1.0]$.
\end{description}
\end{itemize}
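As a minimal illustration of the equation above, in plain Python on nested lists (independent of the actual ImageJ-based implementation shipped in this commit):

def preprocess(raw, white, dark):
    # apply P = (R - D) / (W - D) pixel by pixel;
    # raw, white and dark are 2D lists of numbers with identical shapes
    return [[(r - d) / float(w - d)
             for r, w, d in zip(raw_row, white_row, dark_row)]
            for raw_row, white_row, dark_row in zip(raw, white, dark)]

Given the input constraints above, each output value lands in $[0.0;1.0]$; a noisy dark pixel reaching the white value would make the denominator vanish, which is exactly the failure mode discussed in the meeting notes later in this commit.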
\subsubsection{Ipr/Lipase/Estimate White}
This action allows the user to estimate the white image from an opened sequence. The white image represents the maximum value each pixel can have, regardless of what is observed. It is an image of the light and optics non-uniformity. The algorithm comes from the Matlab telemosToolbox's estimatewhiteFluoImageTelemos function and is documented in its comments.
This is only guaranteed to give a good white estimate if each pixel position of the sequence receives the maximum amount of light in at least one frame of the sequence.
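A naive stand-in for this estimator is the per-pixel maximum over all frames (a sketch only; the actual estimatewhiteFluoImageTelemos algorithm may additionally smooth or fit the result):

def estimate_white(frames):
    # frames: list of 2D frames (lists of rows); returns the per-pixel
    # maximum over the whole sequence as a new 2D list
    return [[max(pixel_values) for pixel_values in zip(*rows)]
            for rows in zip(*frames)]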
\subsubsection{Ipr/Lipase/Compute globules area}
This action performs a detection of globules in the given sequence and, for each image in the sequence, computes the area of all globules. At the end of the computation, a graph showing the area of the globules over time is displayed. Figure \ref{fig:trap_sequence1} shows an example sequence that can be processed with this action.
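For intuition, the per-frame measurement can be caricatured as a threshold-and-count (the threshold value 2000 echoes the PARTICLE\_THRESHOLD used in the Makefile test above; the real plugin performs proper globule detection rather than a bare pixel threshold):

def globules_area(frame, particle_threshold=2000):
    # crude area measurement: count the pixels above the threshold
    return sum(1 for row in frame for value in row if value > particle_threshold)

sequence = [[[0, 2500], [1800, 3000]], [[0, 0], [2200, 0]]]  # two tiny 2x2 frames
areas = [globules_area(frame) for frame in sequence]  # [2, 1]: the curve plotted over time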
@@ -253,7 +283,7 @@
\end{equation}
where $w_s$ and $w_a$ act as weighting parameters to give more importance to strong peaks or to low noise along the circles.
-\subsubsection{Efficient computation of $g(p,r)$ and $\sigma(p,r)$}
+\subsubsection{Efficient computation of \texorpdfstring{$g(p,r)$}{g(p,r)} and \texorpdfstring{$\sigma(p,r)$}{sigma(p,r)}}
$g(p,r)$ and $\sigma(p,r)$ can be computed efficiently using convolution techniques. For this, the disc area around pixels is discretised into $n_r \times n_a$ areas, $n_r$ being the number of radial zones (the radial range $\lbrack 0;R \rbrack$ around pixel $p$ is split into $n_r$ equal-size zones), and $n_a$ the number of angular zones (the angular range $\lbrack 0;2\pi \rbrack$ is split into $n_a$ angular sectors).
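A sketch of this trick with NumPy/SciPy (illustration only, as these libraries are not available in Fiji's Jython): a single convolution with a normalised binary ring kernel yields the ring mean $g$, and $\sigma$ follows from $E[X^2] - E[X]^2$.

import numpy as np
from scipy.signal import convolve2d

def ring_kernel(r_min, r_max):
    # normalised binary annulus covering r_min <= distance < r_max
    radius = int(np.ceil(r_max))
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    distances = np.hypot(x, y)
    ring = ((distances >= r_min) & (distances < r_max)).astype(float)
    return ring / ring.sum()

def ring_mean_and_std(image, r_min, r_max):
    # image: 2D float ndarray; returns g(p, r) and sigma(p, r) for every pixel p
    kernel = ring_kernel(r_min, r_max)
    mean = convolve2d(image, kernel, mode='same', boundary='symm')
    mean_of_squares = convolve2d(image ** 2, kernel, mode='same', boundary='symm')
    return mean, np.sqrt(np.maximum(mean_of_squares - mean ** 2, 0.0))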


@@ -0,0 +1,18 @@
# Progress meeting of 04/11/2020 (by videoconference)
## Attendees
- Véronique Vié
- Guillaume Raffy
## Minutes
- graffy demonstrated the brand-new Preprocess_Sequence plugin
- discussion of the preprocessing results:
- the dark image has to be smoothed: as it is very noisy, some of its pixels can reach the white value, which produces a pixel with an infinite value in the preprocessed image (see the sketch after these minutes). graffy will therefore add an option to the plugin to smooth the dark image
- graffy will also add an option to smooth the raw image, because without this smoothing the preprocessed image shows strong noise where the white image is weak, because of the division
- we wondered whether the fluorescence signal near the trap edges is explained:
  1. by a high concentration of protein sticking to the edges
  2. or simply by an optical phenomenon due to the refractive index of the solution differing from the refractive index of the trap.
  We lean towards explanation 2, because explanation 1 should show more signal on the edges closest to the center than on the far edges (the protein is supposed to be spread uniformly across the field of view, but is more strongly lit at the center); this is not the case
- we recalled that the globule volumes cannot be deduced from the global area of the globules (that would require the statistical distribution of the globule radii, which the global approach does not give us). The fine-grained analysis of the globules, on the other hand, could let us estimate the globule volumes
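The division blow-up mentioned in the dark-smoothing point can be illustrated with made-up pixel values (a sketch, not code from the plugin):

r, w, d = 1995.0, 2000.0, 2000.0  # a noisy dark pixel that happens to reach the white value
# (r - d) / (w - d) would divide by zero, yielding an "infinite" preprocessed pixel
smoothed_d = (1980.0 + 2000.0 + 1985.0) / 3.0  # a mean filter pulls the outlier back towards its neighbours
p = (r - smoothed_d) / (w - smoothed_d)  # finite again, close to 0.57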


@@ -0,0 +1,87 @@
#@ ImagePlus (label="the input image stack") INPUT_STACK
#@ ImagePlus (label="the input white image") INPUT_WHITE
#@ ImagePlus (label="the input dark image") INPUT_DARK
#@output ImagePlus PREPROCESSED_STACK
"""This script is supposed to be launched from fiji's jython interpreter
"""
#from ij import IJ # pylint: disable=import-error
#WHITE_ESTIMATE = IJ.openImage('/Users/graffy/ownCloud/ipr/lipase/lipase.git/white_estimate.tiff')
# # note: fiji's jython doesn't support encoding keyword
# https://imagej.net/Scripting_Headless
# String lipase_src_root_path
# String(label="Please enter your name",description="Name field") name
# OUTPUT String greeting
import sys
print('python version %s' % sys.version) # prints python version
from lipase.settings import UserSettings
# it is necessary to add lipase's root path to sys.path if run from fiji as a script, otherwise jython fails to find lipase's modules such as catalog
# sys.path.append(lipase_src_root_path) # pylint: disable=undefined-variable
# from lipase import Lipase, ImageLogger
from lipase.imageengine import IImageEngine, PixelType, StackImageFeeder
from lipase.imagej.ijimageengine import IJImageEngine
from lipase.telemos import WhiteEstimator
from lipase.catalog import ImageCatalog
from ij import IJ
from ij.gui import GenericDialog, DialogListener
from java.awt.event import ItemListener
# class MyListener(DialogListener):
# def dialogItemChanged(self, gd, event):
# IJ.log("Something was changed (event = %s)" % event)
# IJ.log("event's attributes : %s" % str(dir(event)))
# # Something was changed (event = java.awt.event.ItemEvent[ITEM_STATE_CHANGED,item=res_soleil2018/DARK/DARK_40X_60min_1 im pae min_1/Pos0,stateChange=SELECTED] on choice0)
# # event's attributes : getItem, getItemSelectable, getSource, getStateChange, stateChange, ... (plus the usual java.awt.event.ItemEvent constants and Object methods)
def run_script():
    user_settings = UserSettings()
    ie = IJImageEngine()
    IImageEngine.set_instance(ie)
    catalog = ImageCatalog(user_settings.raw_images_root_path)
    # create placeholder images, then attach the script's input images to them
    src_hyperstack = IImageEngine.get_instance().create_hyperstack(width=1, height=1, num_channels=1, num_slices=1, num_frames=1, pixel_type=PixelType.U8)
    src_hyperstack.hyperstack = INPUT_STACK
    src_white = IImageEngine.get_instance().create_image(width=1, height=1, pixel_type=PixelType.F32)
    src_white.ij_image = INPUT_WHITE
    # the division below requires floating point images
    if src_white.get_pixel_type() != PixelType.F32:
        src_white = src_white.clone(clone_pixel_type=PixelType.F32)
    src_dark = IImageEngine.get_instance().create_image(width=1, height=1, pixel_type=PixelType.F32)
    src_dark.ij_image = INPUT_DARK
    if src_dark.get_pixel_type() != PixelType.F32:
        src_dark = src_dark.clone(clone_pixel_type=PixelType.F32)
    dst_preproc_stack = IImageEngine.get_instance().create_hyperstack(width=src_hyperstack.get_width(), height=src_hyperstack.get_height(), num_channels=src_hyperstack.num_channels(), num_slices=src_hyperstack.num_slices(), num_frames=src_hyperstack.num_frames(), pixel_type=PixelType.F32)
    src_image_feeder = StackImageFeeder(src_hyperstack)
    src_it = iter(src_image_feeder)
    # the denominator (white - dark) doesn't depend on the frame, so compute it once
    denom_image = ie.subtract(src_white, src_dark)
    frame_index = 0
    for src_image in src_it:
        # apply the preprocessing equation P = (R - D) / (W - D) to this frame
        nomin_image = ie.subtract(src_image, src_dark)
        preproc_image = ie.divide(nomin_image, denom_image)
        dst_preproc_stack.set_image(preproc_image, frame_index=frame_index, slice_index=0, channel_index=0)  # TODO: use frame_index, slice_index and channel_index from the feeder
        frame_index += 1
    global PREPROCESSED_STACK
    PREPROCESSED_STACK = dst_preproc_stack.hyperstack

# note : when launched from fiji, __name__ doesn't have the value "__main__", as it does when launched from python
run_script()
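For reference, a headless invocation would mirror the Compute_Globules_Area line in the Makefile above; the script path and the parameter values below are assumptions, not taken from this commit:

$(FIJI_EXE_PATH) --ij2 --headless --run './src/ij-plugins/Ipr/Lipase/Preprocess_Sequence.py' "INPUT_STACK='<stack.tif>', INPUT_WHITE='<white.tif>', INPUT_DARK='<dark.tif>'"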