\documentclass[a4paper,10pt]{article}

\usepackage{lmodern} % https://tex.stackexchange.com/questions/58087/how-to-remove-the-warnings-font-shape-ot1-cmss-m-n-in-size-4-not-available
\usepackage[utf8]{inputenc}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage[htt]{hyphenat} % allow hyphen inside texttt to avoid overfull hbox warnings
\usepackage[english, french]{babel}
\usepackage[margin=0.5in]{geometry} % default margins are too big for my taste: too much wasted space http://kb.mit.edu/confluence/pages/viewpage.action?pageId=3907057
\usepackage{amsmath} % provides \underset
\usepackage{dirtree} % provides \dirtree
\usepackage{hyperref} % provides \url

\hyphenation{tu-yau}

\title{lipase manual}
\author{Guillaume Raffy \and Véronique Vié}

\begin{document}

\selectlanguage{english}

\maketitle

\section{lipase imagej plugin user guide}

This section describes how to use the lipase imagej plugin.

\subsection{lipase imagej plugin menu items}

\subsubsection{Ipr/Lipase/Define raw images root}

This action lets the user choose the directory that contains the input sequences. This directory is called \texttt{raw\_images\-\_root\_path}, and its value is stored in the user's home directory, in a file named \texttt{\textasciitilde/.fr.univ-rennes1\-.ipr.lipase.json}. This \texttt{raw\_images\-\_root\_path} is used by several other lipase imagej plugin menu items, so it is probably the first action the user is expected to perform. Unless the user has multiple image databases, this action only needs to be performed once.

The chosen directory is expected to contain sequences of images, with the proper accompanying metadata files (\texttt{display\_and\_comments.txt} and \texttt{metadata.txt}) saved by micromanager\footnote{\url{https://micro-manager.org/}}. An example of such a sequence database is shown in figure \ref{fig:input_images_layout}.

\begin{figure}[htbp]
\dirtree{%
.1 \texttt{<raw\_images\_root\_path>}.
.2 res\_soleil2018.
.3 dark.
.4 dark\_40x\_60min\_1 im pae min\_1.
.5 pos0.
.6 ....
.6 metadata.txt.
.5 display\_and\_comments.txt.
.4 dark\_40x\_zstack\_vis\_327-353\_1.
.5 pos0.
.6 ....
.6 metadata.txt.
.5 display\_and\_comments.txt.
.3 ggh.
.4 ggh\_2018\_cin2\_phig\_l\_327\_vis\_-40\_1.
.5 pos0.
.6 img\_0000000000\_dm300\_327-353\_fluo\_000.tif.
.6 img\_0000000001\_dm300\_327-353\_fluo\_000.tif.
.6 ....
.6 img\_0000000039\_dm300\_327-353\_fluo\_000.tif.
.6 img\_0000000000\_dm300\_nofilter\_vis\_000.tif.
.6 img\_0000000001\_dm300\_nofilter\_vis\_000.tif.
.6 ....
.6 img\_0000000039\_dm300\_nofilter\_vis\_000.tif.
.6 metadata.txt.
.5 pos1.
.6 ....
.6 metadata.txt.
.5 display\_and\_comments.txt.
.3 white.
.4 ....
}
\caption{example of input images database}
\label{fig:input_images_layout}
\end{figure}

\subsubsection{Ipr/Lipase/Display Sequence}

This action allows the user to display a sequence interactively chosen from the catalog of sequences.

\subsubsection{Ipr/Lipase/Preprocess Sequence}

This action allows the user to preprocess a sequence by removing lighting and optics artifacts, using the following equation:

\begin{equation}
P(x,y,t) = \frac{R(x,y,t) - D(x,y)}{W(x,y) - D(x,y)}
\end{equation}

where $D$ is the dark image, $W$ the white image, $R$ the raw image and $P$ the preprocessed image, as explained below.

\begin{itemize}
\item Input data:
\begin{description}
\item [input image stack] the sequence of raw images that need to be preprocessed. For each pixel location, no value is expected to be outside the range defined by the dark image and the white image at that location.
\item [white image] the image that would be obtained if we were looking at a 100\% reflective material under the lighting and optic conditions that were used to capture the raw sequence.
\item [dark image] the image that would be obtained if we were looking at a 0\% reflective material under the lighting and optic conditions that were used to capture the raw sequence.
\end{description}
\item Output data:
\begin{description}
\item [uniform sequence] the preprocessed sequence after removal of lighting and optics non-uniformity. In this preprocessed sequence, each pixel value is expected to be in the range $[0.0;1.0]$.
\end{description}
\end{itemize}

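
The correction above is a pure per-pixel operation, so it can be written directly as array arithmetic. Below is a minimal sketch of the idea in Python with numpy (not the plugin's actual code); the names \texttt{preprocess\_sequence}, \texttt{raw\_stack}, \texttt{dark} and \texttt{white} are placeholders for images loaded by whatever means you prefer.

\begin{verbatim}
import numpy as np

def preprocess_sequence(raw_stack, dark, white):
    """Apply P = (R - D) / (W - D) to every frame of a raw sequence.

    raw_stack: float array of shape (t, height, width)
    dark, white: float arrays of shape (height, width)
    Returns the 'uniform sequence', expected to lie in [0.0; 1.0].
    """
    denom = white - dark
    # guard against division by zero where white == dark
    denom = np.where(denom == 0.0, np.finfo(float).eps, denom)
    return (raw_stack - dark[None, :, :]) / denom[None, :, :]

# hypothetical usage with a synthetic sequence, just to show the shapes involved
if __name__ == "__main__":
    dark = np.full((4, 4), 10.0)
    white = np.full((4, 4), 200.0)
    raw_stack = np.random.uniform(10.0, 200.0, size=(3, 4, 4))
    uniform = preprocess_sequence(raw_stack, dark, white)
    assert uniform.min() >= 0.0 and uniform.max() <= 1.0
\end{verbatim}
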
\subsubsection{Ipr/Lipase/Estimate White}

This action allows the user to estimate the white image from an opened sequence. The white image represents the maximum value each pixel can take, regardless of what is observed: it is an image of the lighting and optics non-uniformity. The algorithm comes from the Matlab telemosToolbox's estimatewhiteFluoImageTelemos function and is documented in its comments.

This is only guaranteed to give a good white estimate if each pixel position of the sequence receives the maximum amount of light in at least one frame of the sequence.

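
The actual estimator is the one from estimatewhiteFluoImageTelemos and is not reproduced here. Purely as an illustration of the requirement stated above, the naive stand-in below (a per-pixel maximum over time, written in numpy with the hypothetical name \texttt{naive\_white\_estimate}) makes it obvious why every pixel must be fully lit in at least one frame.

\begin{verbatim}
import numpy as np

def naive_white_estimate(sequence):
    """Per-pixel maximum over time.

    This is NOT the telemosToolbox estimator; it is only a naive stand-in
    illustrating the requirement above: if a pixel never receives maximum
    light in any frame, its temporal maximum underestimates the white value.

    sequence: float array of shape (t, height, width)
    """
    return sequence.max(axis=0)
\end{verbatim}
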
\subsubsection{Ipr/Lipase/Compute globules area}

This action detects globules in the given sequence and, for each image in the sequence, computes the total area of the detected globules. At the end of the computation, a graph showing the globules area over time is displayed. Figure \ref{fig:trap_sequence1} shows an example sequence that can be processed with this action.

\begin{itemize}
\item Input data:
\begin{description}
\item [input image stack] the sequence of images containing particle-like globules evolving with time over a static background.
\item [background image] the image that will be used as the background. This background image is expected to contain everything but the particles that we want to detect. If one of the images in your sequence contains no particle at all, it can be used as the background image.
\item [particle threshold] the threshold used to detect particles, expressed in grey levels.
\begin{itemize}
\item if set too high, the area of particles will be underestimated, as some particles will be either undetected or detected smaller than they actually are.
\item if set too low, the area of particles will be overestimated, as some of the background will be wrongly detected as particles, because the background is never completely static (noise, changes in illumination, etc.).
\end{itemize}
\end{description}
\item Output data:
\begin{description}
\item [area over time] this 1D data shows the evolution of the detected globules area for each frame in the sequence. Note that the globules area is expressed as the coverage ratio of the detected globules in each image (its value is therefore in the range $[0;1]$).
\end{description}
\end{itemize}

Here's how it works:
\begin{itemize}
\item the sequence \texttt{diff} is computed by subtracting \texttt{background image} from \texttt{input image stack}
\item the sequence \texttt{abs\_diff} is computed as the absolute value of \texttt{diff}
\item the sequence \texttt{is\_globule} is computed by thresholding \texttt{abs\_diff} with the threshold value \texttt{particle threshold}
\item \texttt{area over time} is computed by counting the number of non-zero pixel values in each frame of \texttt{is\_globule}, normalised by the number of pixels in the frame to obtain a coverage ratio (see the sketch below)
\end{itemize}

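
A minimal numpy sketch of these four steps (not the plugin's actual code; the names \texttt{globules\_area\_over\_time}, \texttt{input\_stack} and \texttt{background} are placeholders):

\begin{verbatim}
import numpy as np

def globules_area_over_time(input_stack, background, particle_threshold):
    """Coverage ratio of detected globules for each frame.

    input_stack: float array of shape (t, height, width)
    background: float array of shape (height, width)
    particle_threshold: threshold expressed in grey levels
    Returns a 1D array of length t with values in [0; 1].
    """
    diff = input_stack - background             # per-frame difference with the background
    abs_diff = np.abs(diff)                     # globules can be darker or brighter than the background
    is_globule = abs_diff > particle_threshold  # boolean mask of globule pixels
    pixels_per_frame = input_stack.shape[1] * input_stack.shape[2]
    return is_globule.sum(axis=(1, 2)) / pixels_per_frame
\end{verbatim}
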
\subsubsection{Ipr/Lipase/Radial profiles}

This imagej plugin computes the $g(p, r)$ and $\sigma(p, r)$ images of the method described in section \ref{sec:circularness}.

\begin{itemize}
\item Input data:
\begin{description}
\item [input image] an image containing particle-like globules
\item [maximal radius of globules] the maximum radius of the globules we expect to find in the input image, expressed in pixels (the value of $R$ described in section \ref{sec:circularness})
\item [number of radial sectors] the value of $n_r$ described in section \ref{sec:circularness}
\item [number of angular sectors] the value of $n_a$ described in section \ref{sec:circularness}
\end{description}
\item Output data:
\begin{description}
\item [$g(p, r)$] this is returned as a hyperstack, where the channel axis is used for the different values of $r$
\item [$\sigma(p, r)$] this is returned as a hyperstack, where the channel axis is used for the different values of $r$
\end{description}
\end{itemize}

\subsubsection{Ipr/Lipase/Detect globules}

This imagej plugin detects circular-shaped particles in the input image, using the method described in section \ref{sec:circularness}.

\begin{itemize}
\item Input data:
\begin{description}
\item [input image] an image containing particle-like globules
\item [mask image] image containing non-zero values for pixels that should be ignored. This is useful to ignore areas in the image that we know can't contain globules.
\item [maximal radius of globules] the maximum radius of the globules we expect to find in the input image, expressed in pixels (the value of $R$ described in section \ref{sec:circularness})
\item [number of radial sectors] the value of $n_r$ described in section \ref{sec:circularness}
\item [number of angular sectors] the value of $n_a$ described in section \ref{sec:circularness}
\item [max finder parameters] the max finder is used to detect peaks in the circularness image. See section \ref{sec:max_finder} for this algorithm and the meaning of its parameters.
\end{description}
\item Output data:
\begin{description}
\item [the positions of detected globules] they are returned in the form of an imagej table and displayed as crosses in the selection layer of the input image.
\end{description}
\end{itemize}

\section{catalog images}

Images have been acquired on the telemos microscope\footnote{\url{https://www6.inrae.fr/pfl-cepia/content/download/3542/34651/version/1/file/notes_TELEMOS.pdf}}.

image prefixes:
\begin{description}
\item[AF]
\item[blé] wheat sections
\item[CA] almond section
\item[FE] spinach leaf
\item[GGH] human fat globule
\item[CRF] spinach leaf chloroplasts
\item[OL] oleosome
\item[DARK] dark
\item[white]
\end{description}

\begin{description}
\item[cin1] kinetics 1
\begin{description}
\item[\texttt{phiG\_40x\_1}] kinetics before and after injection of the gastric enzyme
\item[\texttt{phiG\_40x\_Zstack20um\_1}] stack
\end{description}

\begin{tabular}{l|r|p{0.4\textwidth}}
file name & time & action \\
\hline
\texttt{phiG\_40x\_1} & 0 min & recording starts; wait 10 min (for the bleaching)\\
 & 10 min & start of the gastric phase injection (push) \\
 & 13 min & the gastric phase (the small tube contains $20 \mu l$) reaches the cell all at once (1 nanol) \\
 & 15 min & injection is stopped \\
\cline{1-1} \texttt{phiG\_40x\_Zstack20um\_1} & 50 min & a stack is acquired\\
\cline{1-1} \texttt{phiG\_I\_40x\_1} & 51 min & start of the intestinal phase injection (push)\\
 & x min & injection is stopped \\
\cline{1-1} \texttt{phiG\_I\_40x\_Zstack20um\_1} & 90 min & a stack is acquired
\end{tabular}

\item[cin2] another sample, similar to cin1
\item[cond5678] non-realistic conditions
\end{description}

\section{Algorithms}

\subsection{computing background image for trap sequences}

Trap sequences show traps at fixed positions with particles that move over time, as shown in figure \ref{fig:trap_sequence1}. In order to detect the particles, we can subtract from each image a background image, which is an image of the scene without any particle.

If we suppose that the particles move fast enough, we can estimate this background image $B$ as:

\begin{equation}
B(x,y) = \underset{t\in \{1 \ldots T_{max}\}}{\mathrm{median}} \{I(x,y,t)\}
\end{equation}

where $I(x,y,t)$ is the value of the input sequence at time $t$ and pixel position $(x,y)$, and $T_{max}$ is the number of frames in the sequence.

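
Since the median is taken independently for each pixel along the time axis, this estimate is a one-liner in array form. A minimal numpy sketch (the names \texttt{estimate\_background} and \texttt{sequence} are placeholders for the loaded trap sequence):

\begin{verbatim}
import numpy as np

def estimate_background(sequence):
    """Per-pixel temporal median of a trap sequence.

    sequence: array of shape (t, height, width)
    Returns B of shape (height, width); the median is a good estimate only
    if every pixel is free of particles in at least half of the frames,
    i.e. if the particles move fast enough.
    """
    return np.median(sequence, axis=0)
\end{verbatim}
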
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=1.0\textwidth]{graphics/res_soleil2018_GGH_GGH_2018_cin2_phiG_I_327_vis_-40_1_Pos0_img_000000000_DM300_nofilter_vis_000.png}
\caption{Frame 0}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=1.0\textwidth]{graphics/res_soleil2018_GGH_GGH_2018_cin2_phiG_I_327_vis_-40_1_Pos0_img_000000019_DM300_nofilter_vis_000.png}
\caption{Frame 19}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=1.0\textwidth]{graphics/res_soleil2018_GGH_GGH_2018_cin2_phiG_I_327_vis_-40_1_Pos0_img_000000039_DM300_nofilter_vis_000.png}
\caption{Frame 39}
\end{subfigure}
\caption{Example of trap sequence (\texttt{res\_soleil2018/GGH/GGH\_2018\_cin2\_phiG\_I\_327\_vis\_-40\_1/Pos0})}
\label{fig:trap_sequence1}
\end{figure}

\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=1.0\textwidth]{graphics/soleil2016_GGHL_rDGL_SGF55_lambda_Em_cinsuite_1_Pos0_img_000000000_DM300_nofilter_vis_000.png}
\caption{Frame 0}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=1.0\textwidth]{graphics/soleil2016_GGHL_rDGL_SGF55_lambda_Em_cinsuite_1_Pos0_img_000000019_DM300_nofilter_vis_000.png}
\caption{Frame 19}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=1.0\textwidth]{graphics/soleil2016_GGHL_rDGL_SGF55_lambda_Em_cinsuite_1_Pos0_img_000000039_DM300_nofilter_vis_000.png}
\caption{Frame 39}
\end{subfigure}
\\
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=1.0\textwidth]{graphics/soleil2016_GGHL_rDGL_SGF55_lambda_Em_cinsuite_1_Pos0_img_000000000_DM300_327-353_fluo_000.png}
\caption{Frame 0, 327nm - 353nm}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=1.0\textwidth]{graphics/soleil2016_GGHL_rDGL_SGF55_lambda_Em_cinsuite_1_Pos0_img_000000019_DM300_327-353_fluo_000.png}
\caption{Frame 19, 327nm - 353nm}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=1.0\textwidth]{graphics/soleil2016_GGHL_rDGL_SGF55_lambda_Em_cinsuite_1_Pos0_img_000000039_DM300_327-353_fluo_000.png}
\caption{Frame 39, 327nm - 353nm}
\end{subfigure}
\\
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=1.0\textwidth]{graphics/soleil2016_GGHL_rDGL_SGF55_lambda_Em_cinsuite_1_Pos0_img_000000000_DM300_420-480_fluo_000.png}
\caption{Frame 0, 420nm - 480nm}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=1.0\textwidth]{graphics/soleil2016_GGHL_rDGL_SGF55_lambda_Em_cinsuite_1_Pos0_img_000000019_DM300_420-480_fluo_000.png}
\caption{Frame 19, 420nm - 480nm}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=1.0\textwidth]{graphics/soleil2016_GGHL_rDGL_SGF55_lambda_Em_cinsuite_1_Pos0_img_000000039_DM300_420-480_fluo_000.png}
\caption{Frame 39, 420nm - 480nm}
\end{subfigure}
\caption{Example of trap sequence (\texttt{soleil2016/GGHL\_rDGL\_SGF55\_lambda\_Em\_cinsuite\_1/Pos0})}
\label{fig:sequence_gghl1}
\end{figure}

\subsection{computing the circularness image}

\label{sec:circularness}

The circularness of a pixel is defined as the likelihood that it is the center of a circular symmetry. Computing this circularness for each pixel yields what we call a circularness image.

The idea is that for each pixel $p$, treating this pixel as a center of circular symmetry, we compute:
\begin{itemize}
\item the radial profile $g(p,r)$ (where $r$ is the radius from the center of the pixel) of the pixel values in the neighborhood of this pixel.
\item the radial profile standard deviation $\sigma(p,r)$ in the same neighborhood ($\sigma(p,r)$ is the standard deviation of the image on the circle centered on $p$ and of radius $r$).
\end{itemize}

Let $s(p)$ be the contrast of the radial profile, computed as the standard deviation of the radial profile $g(p, r)$, and $a(p)$ the angular noise, computed as the standard deviation of $\sigma(p,r)$. Centers of circular particles can then be detected at pixels where:
\begin{itemize}
\item the angular noise $a(p)$ is small (which means there is a strong circular symmetry with this pixel as its center)
\item and the radial profile exhibits a strong peak (so that uniform areas are not considered proper centers of circular symmetry). The position of the peak gives the particle radius. $s(p)$ can be used as an estimator of the peak strength.
\end{itemize}

The circularness $c(p)$ of pixel $p$ is computed as follows:
\begin{equation}
c(p) = w_s \, s(p) - w_a \, a(p)
\end{equation}
where $w_s$ and $w_a$ act as weighting parameters to give more importance to strong peaks or to low noise along circles.

\subsubsection{Efficient computation of \texorpdfstring{$g(p,r)$}{g(p,r)} and \texorpdfstring{$\sigma(p,r)$}{sigma(p,r)}}

$g(p,r)$ and $\sigma(p,r)$ can be computed efficiently using convolution techniques. For this, the disc area around each pixel is discretised into $n_r \times n_a$ areas, $n_r$ being the number of radial zones (the radial range $\lbrack 0;R \rbrack$ around pixel $p$ is split into $n_r$ equal-size zones), and $n_a$ the number of angular zones (the angular range $\lbrack 0;2\pi \rbrack$ is split into $n_a$ angular sectors).

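
To make the definitions above concrete, here is a plain numpy sketch that computes $g(p,r)$, $\sigma(p,r)$, $s(p)$, $a(p)$ and $c(p)$ for a single pixel by explicit sampling of the $n_r \times n_a$ sectors. It follows the formulas of this section literally, but it is not the plugin's convolution-based implementation; the function name \texttt{circularness\_at}, the nearest-pixel sampling and the default weights are assumptions made for the example.

\begin{verbatim}
import numpy as np

def circularness_at(image, px, py, R, n_r, n_a, w_s=1.0, w_a=1.0):
    """Compute g(p,r), sigma(p,r), s(p), a(p) and c(p) for one pixel.

    The disc of radius R around pixel p = (px, py) is discretised into
    n_r radial zones and n_a angular sectors; each radial zone is sampled
    at n_a points (nearest-pixel lookup, no interpolation).
    """
    radii = (np.arange(n_r) + 0.5) * R / n_r           # center radius of each radial zone
    angles = (np.arange(n_a) + 0.5) * 2 * np.pi / n_a  # center angle of each angular sector
    g = np.empty(n_r)      # radial profile: mean value on each ring
    sigma = np.empty(n_r)  # standard deviation of the values on each ring
    for i, r in enumerate(radii):
        xs = np.clip(np.round(px + r * np.cos(angles)).astype(int),
                     0, image.shape[1] - 1)
        ys = np.clip(np.round(py + r * np.sin(angles)).astype(int),
                     0, image.shape[0] - 1)
        ring = image[ys, xs]
        g[i] = ring.mean()
        sigma[i] = ring.std()
    s = g.std()      # contrast of the radial profile, as defined above
    a = sigma.std()  # angular noise: standard deviation of sigma(p,r) over r
    c = w_s * s - w_a * a
    return g, sigma, s, a, c
\end{verbatim}
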
\subsection{ImageJ's maxima finder}

\label{sec:max_finder}

% https://github.com/imagej/imagej1/blob/master/ij/plugin/filter/MaximumFinder.java#L636
To understand the meaning of the threshold and tolerance parameters, I had to look at the source code\footnote{\url{https://github.com/imagej/imagej1/blob/master/ij/plugin/filter/MaximumFinder.java\#L636}}.
\begin{enumerate}
\item The algorithm first builds the list of maxima candidates. Only local maxima above the threshold are considered valid maxima candidates.
\item Then each maximum candidate is processed to find its neighborhood area, starting from the highest. For each candidate, a propagation is performed on its neighbours as long as the neighbour value is within the tolerance of the maximum candidate. If another candidate is encountered as a neighbour, that candidate is removed, as it has been absorbed by the current maximum candidate.
\end{enumerate}

\begin{description}
\item[threshold] local maxima below this value are ignored
\item[tolerance] ignore local maxima if they are in the neighborhood of another maximum. The neighborhood of a maximum is defined by the area that propagates until the difference between the pixel value and the value of the considered local maximum exceeds the tolerance. The higher the tolerance, the more local maxima are removed.
\end{description}

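
To make the two steps above concrete, here is a much simplified sketch of the idea in Python. It is not ImageJ's MaximumFinder (which works on a 2D image grid): the sketch operates on a 1D signal with a plain breadth-first propagation, and the function name \texttt{simplified\_find\_maxima} is a placeholder.

\begin{verbatim}
import numpy as np
from collections import deque

def simplified_find_maxima(values, threshold, tolerance):
    """Toy 1D version of the two-step procedure described above.

    1. keep local maxima above `threshold` as candidates;
    2. from the highest candidate down, propagate over neighbours whose
       value stays within `tolerance` of the candidate; any other candidate
       reached by this propagation is absorbed (removed).
    Returns the indices of the surviving maxima.
    """
    n = len(values)
    is_candidate = np.zeros(n, dtype=bool)
    for i in range(n):
        left = values[i - 1] if i > 0 else -np.inf
        right = values[i + 1] if i < n - 1 else -np.inf
        if values[i] > threshold and values[i] >= left and values[i] >= right:
            is_candidate[i] = True

    claimed = np.zeros(n, dtype=bool)  # positions already assigned to a maximum
    survivors = []
    for i in sorted(np.flatnonzero(is_candidate), key=lambda k: -values[k]):
        if not is_candidate[i]:
            continue                   # already absorbed by a higher maximum
        survivors.append(i)
        claimed[i] = True
        queue = deque([i])
        while queue:
            j = queue.popleft()
            for k in (j - 1, j + 1):
                if 0 <= k < n and not claimed[k] \
                        and values[k] >= values[i] - tolerance:
                    claimed[k] = True
                    if is_candidate[k]:
                        is_candidate[k] = False  # absorbed by the current maximum
                    queue.append(k)
    return survivors

# hypothetical example:
# simplified_find_maxima(np.array([0, 5, 4, 6, 0]), threshold=1, tolerance=2) -> [3]
\end{verbatim}
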
\end{document}