Review Article | Open Access

Landscape of Biomedical Image Analysis in Management of Diseases – A Clinical Headway (Volume 47, Issue 1)

Umme Abiha1,2, Neha Goel3, Parul Chugh4 and Mohit Kumar Sharma5*

  • 1Indian Institute of Technology, IDRP, Smart Healthcare, Jodhpur, India
  • 2All India Institute of Medical Science, Jodhpur, India
  • 3Forest Research Institute, Dehradun, India
  • 4Amity Institute of Biotechnology, Amity University, Noida, India
  • 5School of Molecular Medicine, Medical University of Warsaw - ul. Żwirki i Wigury 61, Warsaw, Poland

Received: October 21, 2022;   Published: November 02, 2022

*Corresponding author: Mohit Kumar Sharma, School of Molecular Medicine, Medical University of Warsaw - ul. Żwirki i Wigury 61, Warsaw, Poland

DOI: 10.26717/BJSTR.2022.47.007430


ABSTRACT

Traditionally, physiological instruments were used to estimate the state of the human body through specific measurements such as blood glucose, neural conduction, blood pressure, and many more. These measurements provided insights into a patient’s body and aided medical practitioners in clinical evaluation, but they lack the precision and accuracy needed for initial screening and early diagnosis. This led to a wide spectrum of AI tools and software that promote and support biomedical applications in answering a diversity of biological queries. This integrated approach of computer-aided diagnosis has revolutionized diagnosis and screening. This review is therefore a detailed account of the fundamental process of image analysis and of the AI tools available to interpret diseases and formulate treatment options. It also covers the modalities in image analysis and the development of CAD as a potential diagnostic solution.

Keywords: Biomedical imaging; Big data; Deep learning; Automated image analysis; MATLAB; ImageJ; Fiji; CellProfiler; Malaria; Cancer; Skin ailments; Computer-aided diagnosis

Abbreviations: UBIAS: Understanding Based Image Analysis Systems; AI: Artificial Intelligence; AIA: Automated Image Analysis; HCS: High-Content Screening; GUI: Graphical User Interface; CCD: Charge Coupled Device; MDC: Multispectral Digital Colposcope; LEEP: Loop Electrosurgical Excision Procedure; TIF: Tagged Image Format; BMP: Bitmap Image File; DNN: Deep Neural Network; CNN: Convolutional Neural Network; ANN: Artificial Neural Network; RGB: Red Green Blue; ROC: Receiver Operating Characteristic Curve

Introduction

The traditional means of automatic data analysis have transformed into a more integrated approach of cognitive systems. Along with the globalization of AI tools and software, an innovative approach dedicated to the interpretation and semantic analysis of medical image data emerged as a class of cognitive systems: UBIAS (Understanding Based Image Analysis Systems). Automated image analysis, based on AIA algorithms, uses finely tuned software to extract data from digital images. It recognizes specific shapes and patterns in the images and gathers quantitative information that is then used for further data analysis. AIA is heavily used in screening the large amounts of image data generated in high-content screening (HCS) experiments for drug discovery and phenotypic screening in biological research [1-3]. In HCS, changes in cell morphology as a consequence of exposure to chemicals or RNAi reagents are detected using image analysis to elucidate the workings of normal and diseased cells. The term high-content imaging is mostly used for automated microscopes that image cells, tissues, or small organisms using fluorescent and transmitted light [4]. Fluorescent tags attached to different cellular macromolecules help visualize cells using automated microscopy [5,6]. AIA is highly desirable in the pharmaceutical and biological research industries so that a multitude of samples can be analysed in a short amount of time. The visualization of previously inaccessible realms in biology has only been possible due to the coupling between modern computers and microscopy technology [7].

Figure 1 describes the biomedical image analysis system. Various software packages, mostly provided by the vendors, can acquire, analyse, and manage high-content images and offer advanced data analysis systems. In fact, improved GUIs in the latest versions of these software packages support real-world workflows. To implement the HCS approach successfully, specialized tools for data archiving, visualization and mining play a very important role [8]. The individual steps of image analysis are discussed in the next section.

Figure 1: Flow chart of a biomedical image analysis system [9].


Fundamental Process of Biomedical Image Analysis

Image analysis usually involves several steps for measuring various features within the image. After image acquisition, the first step is background correction, followed by identification of objects against the background, separation of individual objects by segmentation, and finally feature extraction from the selected objects [4]. The various steps are discussed below:

Image Acquisition and Storage: The images are usually captured in the 12-bit range (so that images have 4096 shades of intensity) and stored in a lossless format such as TIF or BMP.

However, images can also be acquired and stored in image formats supporting more (14, 16, 24 or 48) bits. To improve handling, images containing multiple color channels, frames of a time lapse, Z-stacks of images, or montages of images can also be grouped in one image format (e.g. TIF format).
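As a brief illustration of this storage step, the following Python sketch (assuming the numpy and tifffile packages) writes a synthetic 12-bit-range, multi-channel stack to a single lossless TIFF and reads it back; it shows the general practice described above rather than any specific vendor workflow, and all file names and values are placeholders.

```python
# Illustrative only: a synthetic 3-channel, 12-bit-range stack (values 0..4095)
# stored in a 16-bit container, grouped in one lossless TIFF as described above.
import numpy as np
import tifffile

stack = np.random.default_rng(0).integers(0, 4096, size=(3, 512, 512), dtype=np.uint16)

tifffile.imwrite("example_stack.tif", stack)      # lossless; channels kept together
loaded = tifffile.imread("example_stack.tif")

print("shape:", loaded.shape, "dtype:", loaded.dtype)
print("grey levels used:", loaded.min(), "to", loaded.max())   # at most 4096 levels
```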

Image Processing: Before features can be extracted from an image, it needs to be cleaned of any systematic errors (e.g. uneven field illumination, optical aberrations, focus failures) or random errors (variations in the number of observed photons stimulating a dye molecule, the number of dye molecules, electrons emitted per stimulation, etc.) using approaches such as the “rolling ball” method [4]. This helps to distinguish the real signal from the background.
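A minimal sketch of this background-correction step is given below, assuming scikit-image’s implementation of the rolling-ball algorithm and one of its bundled sample images; the radius is a tuning parameter chosen purely for illustration.

```python
# Rolling-ball background estimation and subtraction on a grayscale image.
from skimage import data, restoration

image = data.coins()                                  # stand-in sample image
background = restoration.rolling_ball(image, radius=50)
corrected = image - background                        # signal with shading removed
```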

Image Segmentation (or Object Identification): Image segmentation is the process of determining meaningful regions or objects in an image. It involves determining the boundaries of specific targets in the image (objects with some specific properties) to differentiate them from the background using automatic or semi-automatic approaches. The accuracy of this step directly affects the accuracy of the rest of the analysis. Various algorithms have been used for image segmentation, for handling border objects, and for defining secondary or tertiary objects. The simplest method for image segmentation is thresholding; more sophisticated algorithms in general use include Otsu thresholding [10, 11], watershed [12], and GrabCut [13].
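For illustration, the sketch below combines Otsu thresholding with a distance-transform watershed to split touching objects, using scikit-image and SciPy on a built-in sample image; it is a generic example of the segmentation step, not the algorithm of any cited study.

```python
# Otsu threshold, then marker-based watershed on the distance transform.
import numpy as np
from scipy import ndimage as ndi
from skimage import data, filters, feature, segmentation

image = data.coins()                                    # stand-in grayscale image
binary = image > filters.threshold_otsu(image)          # Otsu thresholding

distance = ndi.distance_transform_edt(binary)
coords = feature.peak_local_max(distance, min_distance=10, labels=binary)
peaks = np.zeros(distance.shape, dtype=bool)
peaks[tuple(coords.T)] = True
markers, _ = ndi.label(peaks)                           # one marker per object seed

labels = segmentation.watershed(-distance, markers, mask=binary)
print("objects found:", labels.max())
```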

Feature Extraction: Usually, ‘feature extraction’ means measuring the number, size, shape, intensity, texture, kinetic measurements, etc. of a cell or object in an image. The specifics of each feature may vary between manufacturers and platforms. The intensity (number of photons captured) for a selected feature (a single pixel, a set of pixels, or an object) is then determined and reported as “total” and “mean” intensity units. These measurements can also help in quality control: for example, “nuclear area” and “nuclear intensity” can help identify apoptotic, necrotic or dividing cells, as each of these cell types has a characteristic nucleus. The coordinate (XYZ position) measurements of various features (cells) can help in predicting infection, autophagy and other phenomena. Regions or compartments within or around a cell can be distinguished using secondary object algorithms. Subregions (such as lipid rafts, ribosomes, micronuclei, mitochondria, actin or tubulin strands) can also be identified as puncta (or spots) and measured in a similar manner. This quantitative information on different features from every cell in the imaged samples yields a large volume of multi-dimensional (multi-parameter), hierarchical data that cannot be analysed further manually. This means that image-based analysis is inherently multiplexed and can provide a large amount of quantitative information from an image [14]. Therefore, highly advanced software packages have been developed for automated image analysis to obtain reliable results in much shorter time. In fact, the manufacturers of various image acquisition instruments provide their own dedicated image processing software. However, compared with software available under free licences, these commercial programs may not be flexible enough to allow more complex image manipulations [15]. Below we describe a few of the most popular ones.
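The sketch below illustrates this feature-extraction step with scikit-image’s regionprops_table, which tabulates per-object size, shape and intensity measurements; the simple Otsu labelling used here is only a stand-in segmentation for demonstration.

```python
# Per-object measurements (area, perimeter, shape, intensity, position) as a table.
import pandas as pd
from scipy import ndimage as ndi
from skimage import data, filters, measure

image = data.coins()
labels, _ = ndi.label(image > filters.threshold_otsu(image))   # stand-in segmentation

props = measure.regionprops_table(
    labels, intensity_image=image,
    properties=("label", "area", "perimeter", "eccentricity",
                "mean_intensity", "centroid"))
features = pd.DataFrame(props)
print(features.head())        # one row per object: multiplexed, quantitative data
```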

AI tools and platforms

Even though it is difficult to compare the expertise of a trained biologist with software, working with many samples manually is time-consuming, subjective, and non-quantitative [16]. Therefore, image analysis by cell segmentation and feature extraction using software such as CellProfiler [14], MATLAB, LabVIEW or ImageJ [15] has been a well-established step for more than a decade. However, the analysis and interpretation of multi-parametric cellular descriptors is a more challenging task. It requires powerful statistical and machine learning methods and is facilitated by the possibility of producing visualizations of intermediate results, by the automation of complex workflows such as cross-validation or parameter searches, and by easy access to biological metadata and genomic databases. As a consequence, an acute rise has been observed in the implementation of artificial intelligence solutions across various industries. This rise in AI solutions has only been possible due to increased computational power and the availability of training datasets such as ImageNet [17] and MNIST (modified NIST) [18]. The availability of large amounts of input data for training has led to increasingly accurate AI-based solutions; nonetheless, it requires image processing, manipulation, and finally image labeling. The results from image analysis are usually presented as a figure with statistical information. Both freely available and commercial software exist for quick analytics. Popular examples include the R language (http://cran.r-project.org/), CellProfiler (http://www.cellprofiler.org/), MATLAB (http://www.mathworks.com/products/matlab/), and TIBCO Spotfire (https://www.tibco.com/products/tibco-spotfire). Other software that has been implemented for AIA includes Image-Pro® Plus (Media Cybernetics, Silver Spring, MD), Metamorph® (Universal Imaging, Downingtown, PA), and Visilog (Noesis, France). These have been available for small-scale, human-interactive research imaging in fluorescence microscopy for many years and helped establish image processing and analysis as a valuable tool for cell and molecular biologists. Their open architecture allows the incorporation of powerful software tools from other vendors such as Spotfire (Somerville, MA) DecisionSite®, IDBS (Guildford, UK) ActivityBase, MDL® ISIS (MDL Information Systems, San Leandro, CA), and CellSpace Knowledge Miner.
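As a small illustration of the reporting step mentioned above (presenting results as a figure with statistical information), the sketch below plots the distribution of a per-cell feature together with its mean and standard deviation; the values are synthetic and purely for demonstration.

```python
# Summarise a per-cell feature as a histogram with basic statistics (synthetic data).
import numpy as np
import matplotlib.pyplot as plt

intensities = np.random.default_rng(2).normal(loc=1200, scale=150, size=2000)

plt.hist(intensities, bins=40)
plt.axvline(intensities.mean(), linestyle="--",
            label=f"mean = {intensities.mean():.0f} ± {intensities.std():.0f}")
plt.xlabel("mean nuclear intensity (a.u.)")
plt.ylabel("cell count")
plt.legend()
plt.savefig("intensity_distribution.png")
```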

MATLAB®

MATLAB® is a programming language used by engineers and scientists to analyze data and to design systems and products [19,20]. At its core is the MATLAB language, a matrix-based language that provides a natural representation of computational mathematics. Scientists and engineers in the biotech and pharmaceutical industries use MATLAB® and Simulink® for cross-disciplinary data analysis and end-to-end workflows.

Scientists and engineers may use MATLAB to:

• Integrate data from various sources, such as signal, image, text, and genetic data

• Optimize pharmaceutical manufacturing through process engineering

• Model and simulate drug research and development

• Design, create, and deploy code to control new medical equipment

• Generate automated output reports in Adobe Acrobat, Microsoft Word, and PowerPoint

Additionally, MATLAB® and Simulink® allow engineers to accelerate the development of medical device software and hardware by integrating and automating the many steps of design, implementation, and verification [21]. With these tools, engineers can:

• Develop and test complex algorithms and whole systems prior to implementation

• Use static analysis to detect software defects and establish the accuracy of models and code

• Prototype designs and generate proofs of concept by automatically producing real-time code

ImageJ

ImageJ is the world’s fastest pure Java-based open-source image processing program capable of reading most of the image formats used in biomedical research [22]. It can perform a wide variety of tasks ranging from common image operations (e.g. convolution, edge detection, Fourier transform, histogram and particle analyses) to advanced operations on individual pixels, image regions, whole images and volumes (such as dilation, erosion and closing of structures, and other mathematical operations on image sets). It is a truly versatile program that, being open source, incorporates more than 500 user-written plugins (like NeuronJ, VolumeJ and NucleusJ [22-24]) and macros developed by a strong user community (more than 1700 users/developers). ImageJ supports automated image segmentation, iterative deconvolution, co-localization analysis, time course processing and other analyses using various plugins, either for a single file or for batch processing (using macros). Indeed, ImageJ has become an invaluable tool for image processing and analysis for microscopy labs and facilities alike [22]. There are numerous variants of ImageJ and NIH Image (the ImageJ precursor) available these days [7]. ImageJ2 is the next-generation version of ImageJ that provides a host of new functionality, mostly to support the growing sophistication and complexity of image acquisition [25]. ImageJ’s functionality is further enhanced by a collection of mathematical morphology methods (for 3D image analysis) and plugins available as the MorphoLibJ library [26]. MorphoLibJ can be used to obtain features such as the maximal enclosed ball and geodesic diameter, along with quantified spatial organisation and neighbourhood relationships between labelled regions.
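The morphological operations named above (erosion, dilation, closing) are generic image-processing primitives; purely for illustration, the following scikit-image sketch applies them to a binary image. This is not ImageJ or MorphoLibJ code, merely an equivalent outside the ImageJ ecosystem.

```python
# Binary erosion, dilation and closing with a disk-shaped structuring element.
from skimage import data, filters, morphology

image = data.coins()
binary = image > filters.threshold_otsu(image)
footprint = morphology.disk(3)                  # structuring element

eroded = morphology.binary_erosion(binary, footprint)
dilated = morphology.binary_dilation(binary, footprint)
closed = morphology.binary_closing(binary, footprint)   # fills small holes/gaps
```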

Fiji

Fiji is a biological-image-analysis-focused distribution of the popular open-source software ImageJ. To facilitate quick development of image-processing algorithms, Fiji employs modern software engineering principles to integrate strong software libraries with a diverse set of scripting languages [27]. Fiji makes it easy to convert novel algorithms into ImageJ plugins that can be shared with end users via an integrated update mechanism. Fiji’s core algorithms may be used in a variety of scripting languages common to bioinformaticians, making it easier to prototype novel bioimage solutions. Furthermore, Fiji has a strong distribution mechanism that guarantees new algorithms reach its large user base as quickly as feasible, kicking off an iterative refinement process based on communication between engineers and users [28,29]. In essence, Fiji is intended to be a software-engineering system in which the computer science and biology research communities may collaborate to develop algorithms into usable applications for answering biological research questions.

CellProfiler

CellProfiler was the first free, open-source software package capable of handling and analysing thousands of cell images (high-throughput) for both standard and complex morphological assays [14]. Its source code was originally written in the MATLAB language. It includes several standard methods for illumination correction to address illumination variation. For cell identification in clumped cells, it uses an improved propagate algorithm to determine the borders [30]. Standard measures such as area, shape, intensity, texture, saturation, and blur can be easily determined for stains, cells or subcellular compartments, along with complex measurements such as Zernike shape features and Haralick and Gabor texture features. It is easy to create an image analysis “pipeline” (comparable to ImageJ macros) for image processing functions involving steps such as illumination correction, object identification, and object measurement using a point-and-click graphical user interface (GUI). Since version 2.0, CellProfiler has been ported from MATLAB to Python, and the Cython compiler was implemented, which allows ImageJ to be used within a CellProfiler pipeline [31]. CellProfiler can also run headless (without invoking the GUI) on Linux, using command line switches to specify the input, output and other parameters for analysis. A companion software, CellProfiler Analyst [32], can learn to identify challenging phenotypes of interest using machine learning algorithms that work on the results from CellProfiler [33-35]. The third release (v3.0) of CellProfiler can run on cloud infrastructure, analyse (high-throughput) 3D images, and use convolutional neural networks for deep learning [36]. Distributed-CellProfiler is a series of scripts that help run CellProfiler in a cluster environment [37].

R language for AIA

By combining powerful statistical and machine learning methods, the R language can be used for signal processing, statistical modeling, machine learning and data visualization [38]. For example, to perform advanced image processing and analysis in R, the magick package can be used [39]. It wraps the ImageMagick library, one of the most comprehensive open-source image processing libraries available today, which can work with more than 200 different image formats. The image processing power of magick has been implemented in R packages to segment cells and extract quantitative cellular descriptors. Some R/Bioconductor packages are described below in the context of AIA:

RImageJ: The RImageJ package for image analysis was developed by Romain Francois and Philippe Grosjean as an integration of ImageJ and R through rJava (an R-to-Java interface) [40]. Additionally, Paul Murrell’s code allowed images to be used as raster objects within RImageJ. In the absence of further development, however, the package has been removed from the CRAN repository and archived. It nevertheless deserves a mention here to indicate that there can be several ways of implementing the R language for advanced image analyses.

EBImage: The EBImage package can be used to extract quantitative cellular descriptors [38]. Signal processing, statistical modeling, machine learning and visualization of image data are possible with EBImage in combination with other R-based tools.

ImageHTS: imageHTS is a package designed to analyse high-throughput microscopy-based screens and to operate in distributed environments [41]. It can segment cell images, extract quantitative cell features, predict cell types and browse screen or remote data through web interfaces.

Cellprofile-r: The CellProfileR package acts as an R-language interface to the CellProfiler and CellProfiler Analyst databases, making it easy to quantify phenotypes from thousands of images automatically. A number of convenience functions and workflows are available for sophisticated downstream analyses. It can be obtained from https://github.com/afolarin/cellprofile-r.

Modalities in Image Analysis

Research on developing technical tools and solutions to strengthen the traditionally available medical modalities has been successful to some extent. However, the available solutions are insufficient for various radiological applications, primarily because biomedical image analysis lacks consistency owing to different image processing techniques with variable and complex resolution characteristics [42, 43]. The revolutionary discovery of X-rays by W.C. Roentgen in 1895 [44] laid the foundation of biomedical imaging. It paved the way for the robust development of computed tomography scanners in the 1970s [45], and thus computers became integrated into medical science and clinical practice. The different medical imaging modalities and their characteristics are compared in Table 1.

Table 1: Comparative analysis of different Biomedical imaging modalities [9, 46].


CAD – A Pathological Solution

Image acquisition and analysis are an integral part of all disciplines of biomedical applications. For any task of screening, diagnosis, drug development, molecular-level studies or even developing personalized treatments, the visual aspect of information is of absolute importance [46,47]. Given the various image modalities offered to scientists and specialists, the time and effort demanded is enormous, which leads to underutilization of the available visual information. Computer-aided solutions therefore provide an edge in the process of image analysis. In light of research over the years, along with the availability of massive data, new computer-based solutions, mainly big data and deep learning, have revolutionized data interpretation [48, 49]. The latter has dramatically advanced the analysis of images and videos to the point of transforming computer-aided diagnosis. Moreover, extensive research and development in neural algorithms and software, together with the availability of large annotated biomedical imaging datasets, has improved the segmentation of molecules, cells, lesions, nodules, tumors, organs and other structures of interest. AIA has several advantages and disadvantages. The first advantage is that, compared with manual image analysis, computational resources allow large amounts of image data to be analysed at great speed. One concern with manual image analysis is subjectivity in the results [50], which arises from human errors (due to tiredness or distraction) or differences in opinion during data collection. Another advantage is that computers work tirelessly, overnight or even on weekends, with consistent performance over time [51].

The setup is optimized only once, and reliable results can then be generated indefinitely by the algorithms. On the other hand, a naive user may face certain difficulties or disadvantages: commercial software can be expensive to buy, can be brand-specific, and requires a certain amount of time for training and optimization to obtain accurate image analysis results. CAD has led to various advances in biomedical science and has proved to be a great pathological solution for studying different diseases, as discussed in the next section.

Computer – Aided Advancements in Biomedical Imaging

Malaria

Malaria is a fatal disease and a leading source of infection across the world. Because of its high fatality rate, this epidemic disease has been documented throughout history. Malaria infections are increasing at an exponential rate due to a variety of factors, including a shortage of highly qualified experts in rural regions, data mismanagement, the widespread use of bogus and duplicate medications, the limited availability of low-cost diagnostic technologies, global warming, and more [52,53]. This communicable disease is a complicated, fast-spreading infection that has become difficult to control owing to the high number of malaria parasites. Malaria diagnosis is challenging, and the high density of blood smear microscopic images makes it difficult to distinguish between parasite-infected and non-infected patients [54]. Obtaining an accurate identification of infected parasites with the least amount of time, money, and effort remains a challenge for research experts. Over the previous several decades, computer-aided visual inspection has evolved as a revolutionary assistive software strategy in clinical medical imaging and decision support. Manual visual inspection of this worldwide disease, on the other hand, is subjective, time-consuming, and inaccurate. One of the well-known fundamental issues in separating stained blood smear microscopic picture components is the visual technique for defining and recognizing malaria parasites [55]. The typical method of identifying malaria in a clinic is a laborious and time-consuming task with a low likelihood of yielding an accurate result. In light of these significant problems, computer-aided diagnosis (CAD) has opened the way for more objective evaluation in individualized healthcare and diagnostic tasks. With effective time-series management, the development of CAD has successfully bridged the gap between discriminative local appearances and the global picture context [56]. By annotating imaging datasets and identifying abnormalities under a variety of environmental conditions, CAD has had a significant influence on all patients and imaging modalities. CAD still faces challenges such as changes in the size, shape, and intensity of cell components in blood smear microscopic images across imaging protocols.

Several artificial neural networks with numerous layers have been developed in recent years for a variety of health diagnoses using microscopic images; Razzak et al. introduced a strong model called deep learning, which revived the 1989 concept. Deep learning is a branch of machine learning that uses hierarchical deep image architectures to learn high-level characteristics from pixel intensities [57]. The deep neural network (DNN) architecture has many hidden layers, which is why the network is called deep. It may be used for both classification and regression. DNNs are a new way of achieving outstanding results in a variety of applications, including dimensionality reduction, object segmentation, texture modeling, motion modeling, information retrieval, robotics, natural language processing, and collaborative filtering, among others. Overall CNN model analysis and rigorous empirical assessment aid in the development of high-performance CAD models for medical image tasks with good accuracy [58,59]. Many CAD investigations have focused on the prognosis of malaria parasites and the direct distinction of parasites from non-parasites. For constructing an automated diagnostic system for malaria diagnosis, Das et al. employed SVM and Naive Bayes machine learning classifiers to obtain accuracies of 84% and 83.5%, respectively [60]. Ross et al. developed an 85% accurate three-layer neural network as a classifier for automated malaria detection on thin blood smears [61].
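As a hedged sketch of the classification step described in these studies, the snippet below trains an SVM and a Naive Bayes classifier with scikit-learn on a synthetic feature matrix standing in for per-cell morphological and textural descriptors; it reproduces the workflow, not the data or the reported accuracies of Das et al. or Ross et al.

```python
# Train/evaluate SVM and Naive Bayes on placeholder per-cell features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))        # 20 per-cell features (area, texture, ...) - synthetic
y = rng.integers(0, 2, size=500)      # 1 = parasitised, 0 = uninfected - synthetic labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")), ("Naive Bayes", GaussianNB())]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```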

Cervical Cancer

Cervical cancer is the second most common cancer in women, with a high incidence and mortality rate, which makes it a plausible concern for cancer management and diagnosis [62]. A segmentation algorithm based on a statistical-optimization clustering system was developed to detect acetowhite epithelium regions [63], followed by the detection of acetowhite regions based on chromaticity with a watershed algorithm [64]. This fueled further research, and Huang et al. detected the same regions using color and brightness features estimated in the Lab and HSV color spaces [65]. Gordon and coworkers at Tel-Aviv University developed a segmentation algorithm for three tissue types in cervical imagery (original squamous, columnar, and acetowhite epithelium) based on color and texture information [66]. Colposcopy has proved to be a promising technology for detecting cervical intraepithelial neoplasia. Combining CAD and colposcopy would allow automatic image diagnosis by detecting neoplastic tissue, improving precision and accuracy while reducing time and effort. Recent advances in consumer electronics have led to inexpensive, high-dynamic-range charge-coupled device (CCD) cameras with excellent low-light sensitivity. At the same time, advances in vision chip technology enable high-quality image processing in real time. Together, these advances may enable the acquisition of diagnostically useful digital images of the cervix relatively inexpensively, with or without magnification. Moreover, automated analysis algorithms based on modern image processing techniques have the potential to replace clinical expertise, which may reduce the cost of screening.
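To illustrate the kind of colour-based segmentation described above, the sketch below clusters image pixels in the Lab colour space with k-means and flags the brightest cluster as a candidate acetowhite region; the input image, cluster count and heuristic are assumptions for demonstration and do not reproduce the published algorithms.

```python
# Pixel-wise k-means clustering in Lab colour space (illustrative only).
import numpy as np
from skimage import data, color
from sklearn.cluster import KMeans

rgb = data.astronaut()                      # stand-in RGB image
lab = color.rgb2lab(rgb)                    # L (lightness), a, b channels
pixels = lab.reshape(-1, 3)

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
segmented = km.labels_.reshape(lab.shape[:2])

# Simple heuristic: take the cluster with the highest mean lightness (L)
# as the candidate bright ("acetowhite-like") region.
bright_cluster = np.argmax(km.cluster_centers_[:, 0])
candidate_mask = segmented == bright_cluster
print("candidate pixels:", candidate_mask.sum())
```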

The purpose of the cited study was to explore whether digital colposcopy, combined with recent advances in camera technology and automated image processing, could provide an inexpensive alternative to Pap screening and conventional colposcopy. Park, Follen and Rhodes conducted a pilot study with a multispectral digital colposcope (MDC) to acquire reflectance images of the entire cervix under white-light illumination [67]. They employed an AIA approach including image registration, pattern recognition, clustering, and classification, and developed an algorithm with the potential to identify high-grade precancerous tissue areas across an entire image. They also constructed a gold standard for the entire cervical image using a whole-cervix specimen acquired from a loop electrosurgical excision procedure (LEEP), which was intensively sectioned [68]. Figure 2 depicts the AIA approach employed to screen cervical neoplasia in patient samples.

Figure 2: AIA approach for screening cervical neoplasia [67].


Skin Diseases

Since the 1990s, skin-condition identification and categorization has been a preferred study topic. The majority of research in the literature is focused on skin cancer categorization, with only a small amount of effort devoted to other disorders [69]. As a result of this limited effort, researchers have focused on multivariate clinically derived characteristics rather than photographs. Studies that do focus on image-based skin lesion categorization typically consider only a single disease. Guvenir and Emeksiz describe an expert system for the differential diagnosis of erythemato-squamous disease that incorporates three classifiers and achieves 99.2% accuracy using the voting feature interval-5 algorithm [70]. Ubeyli et al. developed a 97.7% accurate mixed neural network strategy for diagnosing the same disorders [71]. Chang et al. performed similar research, using a decision tree and a neural network to categorize erythemato-squamous disease with a predicted accuracy of 92.62% [72]. Using machine learning techniques, Xie et al. categorized erythemato-squamous disease with an accuracy of 98.61% [73]. In addition, Nugroho et al. developed a digital image analysis approach for identifying vitiligo in skin images [74]. In the developed system, the photos were pre-processed with a low-pass Gaussian filter to reduce specular reflection distortions. Because the suggested approach was tested on only 41 RGB photos, it lacks adaptability and generalizability. In the study by Alam et al., only eczema is categorized, as “mild” or “severe”, based on extracted characteristics, with an accuracy of 93% for healthy images and 92% for classified images in the first stage, and 80% for mild eczema and 93% for severe eczema in the second stage [75].

Guzman et al. also created and assessed a multi-layered system, using an ANN to construct both single-layered and multi-layered systems for eczema detection [76]. The single-layer approach distinguishes between eczema and non-eczema images, whereas the multi-model method distinguishes between three forms of eczema: spotted, scattered, and dry eczema [77]. The extracted characteristics were fed to the ANN, which achieved accuracies of 85.71% to 96.03% for the single-layered system and 87.30% to 92.46% for the multi-layered system [78,79].
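As a hedged illustration of the ANN-based classification discussed in this section, the sketch below trains a small feed-forward network (scikit-learn) on synthetic colour/texture descriptors with three eczema-type labels; the features, labels and architecture are placeholders rather than those of the cited systems.

```python
# Small feed-forward ANN for multi-class skin-lesion feature classification.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 12))            # colour/texture descriptors per image - synthetic
y = rng.integers(0, 3, size=300)          # 0 = spotted, 1 = scattered, 2 = dry - synthetic

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=1)
ann.fit(X_train, y_train)
print(classification_report(y_test, ann.predict(X_test)))
```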

Perspectives and Conclusion

Biomedical image analysis is an interdisciplinary field that applies image processing techniques to biological or medical challenges. The medical images to be analysed contain a wealth of information about the anatomical structure under examination, allowing clinicians to make accurate diagnoses and hence choose appropriate treatment. These images are frequently analysed manually by doctors using visual interpretation. However, owing to differences in interpersonal interpretation, fatigue errors, and ambient disruptions, visual analysis of these images by human observers is limited and essentially subjective. Automated analysis of these images using computers with appropriate techniques, on the other hand, favours objective analysis, enhancing diagnostic confidence and the accuracy of analysis. In the medical imaging sector, computer-assisted analysis for better image interpretation has been a long-standing challenge. On the image-understanding front, recent breakthroughs in machine learning, particularly deep learning, have made significant progress in assisting the identification, classification, and quantification of patterns in medical images. The gains come from using hierarchical feature representations learnt directly from data rather than handcrafted features designed largely on the basis of domain-specific expertise.

Deep learning advances have thrown fresh light on medical image analysis, allowing for the discovery of morphological patterns in images entirely based on data. Because deep learning methods have reached state-of-the-art performance in a variety of medical applications, their use for further development might be a huge step forward in the field of medical computing.

Nevertheless, there is still room for advancement. In the same way that breakthrough improvements in computer vision were achieved by utilizing large amounts of training data, such as the more than 1 million annotated images in ImageNet, one direction would be to create a similarly large, publicly available dataset of medical images, through which deep models could learn more generalized features of medical images and achieve a leap in performance. Also, while data-driven feature representations, particularly in an unsupervised setting, have helped improve accuracy, it is also important to create new methodological architectures that allow domain-specific information to be reflected or included.

References

  1. Lasse E, Wolfgang L, James BL (2010) Imaged-based High-Throughput Screening for Anti-Angiogenic Drug Discovery. Current Pharmaceutical Design 16(35): 3958-3963.
  2. Yarrow JC, Feng Y, Perlman ZE, Kirchhausen T, Mitchison TJ, et al. (2003) Phenotypic Screening of Small Molecule Libraries by High Throughput Cell Imaging. Combinatorial Chemistry & High Throughput Screening 6(4): 279-286.
  3. Mitchison TJ (2005) Small-Molecule Screening and Profiling by Using Automated Microscopy. ChemBioChem 6(1): 33-39.
  4. Buchser W, Collins M, Garyantes T, Guha R, Haney S, Lemmon V, et al. (2012) Assay Development Guidelines for Image-Based High Content Screening, High Content Analysis and High Content Imaging. In: Assay Guidance Manual.
  5. Sittampalam GS, Coussens NP, Brimacombe K, et al. (2012) Assay Guidance Manual [Internet]. Bethesda (MD): Eli Lilly & Company and the National Center for Advancing Translational Sciences.
  6. Eggert US, Mitchison TJ (2006) Small molecule screening by imaging. Current Opinion in Chemical Biology 10(3): 232-237.
  7. Abraham VC, Taylor DL, Haskins JR (2004) High content screening applied to large-scale cell biology. Trends in Biotechnology 22(1): 15-22.
  8. Schneider CA, Rasband WS, Eliceiri KW (2012) NIH Image to ImageJ: 25 years of image analysis. Nature Methods 9: 671.
  9. Giuliano KA, Haskins JR, Taylor DL (2003) Advances in High Content Screening for Drug Discovery. ASSAY and Drug Development Technologies 1(4): 565-577.
  10. Renukalatha S, Suresh KV (2018) A review on biomedical image analysis. Biomed Eng (Singapore) 30(04): 1830001.
  11. Otsu N (1979) A Threshold Selection Method from Gray-Level Histograms. IEEE Transactions on Systems, Man, and Cybernetics 9(1): 62-66.
  12. Sha C, Hou J, Cui H (2016) A robust 2D Otsu’s thresholding method in image segmentation. Journal of Visual Communication and Image Representation 41: 339-351.
  13. Beucher S (1992) The watershed transformation applied to image segmentation, pp. 299-314.
  14. He Y, Sun Y (2015) An automatic image segmentation algorithm based on GrabCut. 6th International Conference on Wireless, Mobile and Multi-Media (ICWMMN 2015) 20-23.
  15. Carpenter AE, Jones TR, Lamprecht MR, Clarke C, Kang IH, et al. (2006) CellProfiler: image analysis software for identifying and quantifying cell phenotypes. Genome Biology 7(10): R100.
  16. Abramoff MD, Magalhães PJ, Ram SJ (2004) Image processing with ImageJ. Biophotonics International 11(7): 36-42.
  17. Lamprecht MR, Sabatini DM, Carpenter AE (2007) CellProfiler: free, versatile software for automated biological image analysis. BioTechniques 42(1): 71-75.
  18. Deng J, Dong W, Socher R, Li L, Kai L, et al. (2009) ImageNet: A large-scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition 20-25.
  19. Lecun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11): 2278-2324.
  20. Demirkaya O, Asyali MH, Sahoo PK (2008) Image processing with MATLAB: applications in medicine and biology. CRC Press.
  21. Schmidt H, Jirstrand M (2006) Systems Biology Toolbox for MATLAB: a computational platform for research in systems biology. Bioinformatics 22(4): 514-515.
  22. Allen LJ (2010) An introduction to stochastic processes with applications to biology. CRC press.
  23. Collins TJ (2007) ImageJ for microscopy. BioTechniques 43(1 Suppl): 25-30.
  24. Meijering E, Jacob M, Sarria J-CF, Steiner P, Hirling H, et al. (2004) Design and validation of a tool for neurite tracing and analysis in fluorescence microscopy images. Cytometry Part A 58A(2): 167-176.
  25. Probst AV, Poulet A, Tatout C, Legland D, Arganda-Carreras I, et al. (2014) NucleusJ: an ImageJ plugin for quantifying 3D images of interphase nuclei. Bioinformatics 31(7): 1144-1146.
  26. Rueden CT, Schindelin J, Hiner MC, DeZonia BE, Walter AE, et al. (2017) ImageJ2: ImageJ for the next generation of scientific image data. BMC Bioinformatics 18(1): 529.
  27. Legland D, Arganda-Carreras I, Andrey P (2016) MorphoLibJ: integrated library and plugins for mathematical morphology with ImageJ. Bioinformatics 32(22): 3532-3534.
  28. Schindelin J, Arganda-Carreras I, Frise E, Kaynig V, Longair M, et al. (2012) Fiji: an open-source platform for biological-image analysis. Nature methods 9(7): 676-682.
  29. Usaj MM, Styles EB, Verster AJ, Friesen H, Boone C, et al. (2016) High-content screening for quantitative cell biology. Trends in cell biology 26(8): 598-611.
  30. Sych T, Schubert T, Vauchelles R, Madl J, Omidvar R, et al. (2009) GUV-AP multifunctional FIJI-based tool for quantitative image analysis of Giant Unilamellar Vesicles. Bioinformatics 35(13): 2340-2342.
  31. Jones TR, Carpenter A, Golland P (2005) Voronoi-Based Segmentation of Cells on Image Manifolds. Computer Vision for Biomedical Image Applications. Berlin, Heidelberg: Springer Berlin Heidelberg.
  32. Kamentsky L, Jones TR, Fraser A, Bray M-A, Logan DJ, et al. (2011) Improved structure, function and compatibility for CellProfiler: modular high-throughput image analysis software. Bioinformatics 27(8): 1179-1180.
  33. Jones TR, Kang IH, Wheeler DB, Lindquist RA, Papallo A, et al. (2008) CellProfiler Analyst: data exploration and analysis software for complex image-based screens. BMC Bioinformatics 9(1): 482.
  34. Jones TR, Carpenter AE, Lamprecht MR, Moffat J, Silver SJ, et al. (2009) Scoring diverse cellular morphologies in image-based screens with iterative feedback and machine learning. Proceedings of the National Academy of Sciences 106(6): 1826-1831.
  35. Misselwitz B, Strittmatter G, Periaswamy B, Schlumberger MC, Rout S, et al. (2010) Enhanced CellClassifier: a multi-class classification tool for microscopy images. BMC Bioinformatics 11(1): 30.
  36. Snijder B, Begemann B, Pelkmans L, Rämö P, Sacher R, et al. (2009) CellClassifier: supervised learning of cellular phenotypes. Bioinformatics 25(22): 3028-3030.
  37. McQuin C, Goodman A, Chernyshev V, Kamentsky L, Cimini BA, et al. (2018) CellProfiler 3.0: Next-generation image processing for biology. PLOS Biology 16(7): e2005970.
  38. Bray M-A, Vokes MS, Carpenter AE (2015) Using CellProfiler for Automatic Identification and Measurement of Biological Objects in Images. Current Protocols in Molecular Biology 109(1): 14.7.1-.7.3.
  39. Fuchs F, Pau G, Boutros M, Sklyar O, Huber W, et al. (2010) EBImage-an R package for image processing with applications to cellular phenotypes. Bioinformatics 26(7): 979-981.
  40. Ooms J (2018) The magick package: Advanced Image-Processing in R.
  41. Francois R, Grosjean P (2009) RImageJ: R bindings for ImageJ.
  42. Pau G, Zhang X, Boutros M, Huber W (2019) imageHTS: Analysis of high-throughput microscopy-based screens. R package version 1.32.1.
  43. Rangayyan RM (2004) Biomedical Image Analysis. (1st)., Boca Raton, FL: CRC Press.
  44. Haidekker MA (2010) Advanced Biomedical Image Analysis: Haidekker/biomedical image. Hoboken, NJ: Wiley-Blackwell.
  45. Faiz U (2013) Comparison between medical imaging modalities, Computing in Medical Physics, Pakistan Institute of Engineering & Applied Sciences, MS-11-JK-10107.
  46. Halmshaw R (1996) The early history of the discovery of x-rays, gamma-rays and industrial radiography. In: Trends in NDE science and technology: proceedings of the fourteenth world conference on NDT V 1.
  47. Johannesen TB, Langmark F, Lote K (2003) Progress in long-term survival in adult patients with supratentorial low-grade gliomas: a population-based study of 993 patients in whom tumors were diagnosed between 1970 and 1993. J Neurosurg 99(5): 854-862.
  48. Cherry SR (2009) Multimodality imaging: Beyond PET/CT and SPECT/CT. Semin Nucl Med 39(5): 348-53.
  49. Roy M, Mali K, Chatterjee S, Chakraborty S, Debnath R, et al. (2019) A study on the applications of biomedical image encryption methods for secured computer-aided diagnostics. In: 2019 Amity International Conference on Artificial Intelligence (AICAI), IEEE, pp. 881-886.
  50. Reeves AP, Kressler BM (2004) Computer-aided diagnostics. Thoracic surgery clinics 14(1): 125-133.
  51. Schilling F, Geppert CE, Strehl J, Hartmann A, Kuerten S, et al. (2019) Digital pathology imaging and computer-aided diagnostics as a novel tool for standardization of evaluation of aganglionic megacolon (Hirschsprung disease) histopathology. Cell and tissue research 375(2): 371-381.
  52. Vashistha R, Chhabra D, Shukla P (2018) Integrated artificial intelligence approaches for disease diagnostics. Indian journal of microbiology 58(2): 252-255.
  53. Pattanaik PA, Mittal M, Khan MZ (2020) Unsupervised deep learning cad scheme for the detection of malaria in blood smear microscopic images. IEEE Access 8: 94936-46.
  54. Dave IR, Upla KP (2017) Computer aided diagnosis of malaria disease for thin and thick blood smear microscopic images. In: 2017 4th International Conference on Signal Processing and Integrated Networks (SPIN), IEEE, pp. 561-565.
  55. Wibisono Y, Nugroho AS, Galinium M (2020) Optimization on Malaria Computer Aided Diagnostic System. In: Proceedings of the International Conference on Engineering and Information Technology for Sustainable Industry, pp. 1-6.
  56. Nag S (2018) A Method for Malaria Parasites Detection Systems. International Journal of Medical Science and Diagnosis Research 2(6).
  57. Sriporn K, Tsai CF, Tsai CE, Wang P (2020) Analyzing Malaria Disease Using Effective Deep Learning Approach. Diagnostics 10(10): 744.
  58. Razzak MI, Alhaqbani B (2015) Automatic detection of malarial parasite using microscopic blood images. Journal of Medical Imaging and Health Informatics 5(3): 591-598.
  59. Das D, Ghosh M, Chakraborty C, Maiti AK, Pal M, et al. (2011) Probabilistic prediction of malaria using morphological and textural information. In 2011 international conference on image information processing, IEEE, pp.1-6.
  60. Frean J (2008) Improving quantitation of malaria parasite burden with digital image analysis. Transactions of the Royal Society of Tropical Medicine and Hygiene 102(11): 1062-1063.
  61. Das DK, Mukherjee R, Chakraborty C (2015) Computational microscopic imaging for malaria parasite detection: a systematic review. Journal of microscopy 260(1): 1-9.
  62. Ross NE, Pritchard CJ, Rubin DM, Duse AG (2006) Automated image processing method for the diagnosis and classification of malaria on thin blood smears. Medical and Biological Engineering and Computing 44(5): 427-436.
  63. Obukhova NA, Motyko AA, Kang U, Bae S-J, Lee D-S, et al. (2017) Automated image analysis in multispectral system for cervical cancer diagnostic. In: 2017 20th Conference of Open Innovations Association (FRUCT), IEEE, pp. 345-351.
  64. Yang S, Guo J (2004) A multi-spectral digital cervigram™ analyzer in the wavelet domain for early detection of cervical cancer. Proceedings of SPIE on Medical Imaging, pp. 1833-1844.
  65. Gordon S, Zimmerman G, Greenspan H (2004) Image segmentation of uterine cervix images for indexing in PACS. Symposium on Computer-Based Medical Systems, Bethesda, Maryland, pp. 1-6.
  66. Xiong J, Wang L, Gu J (2009) Image segmentation of the acetowhite region in cervix images based on chromaticity. Proceedings of the 9th International Conference on Information Technology and Applications in Biomedicine, pp. 1-4.
  67. Huang X, Engel J (2008) Tissue classification using cluster features for lesion detection in digital cervigrams. SPIE Medical Imaging: 69141Z.1-69141Z.8.
  68. Park SY, Follen M, Milbourne A, Rhodes H, Malpica A, et al. (2008) Automated image analysis of digital colposcopy for the detection of cervical neoplasia. J Biomed Opt 13(1): 014029.
  69. Hilal Z, Rezniczek GA, Alici F, Kumpernatz A, Dogan A, et al. (2018) Loop electrosurgical excision procedure with or without intraoperative colposcopy: a randomized trial. Am J Obstet Gynecol 219(4): 377.e1-377.e7.
  70. Kia S, Setayeshi S, Shamsaei M, Kia M (2013) Computer-aided diagnosis (CAD) of the skin disease based on an intelligent classification of sonogram using neural network. Neural Computing and Applications 22(6): 1049-1062.
  71. Güvenir HA, Emeksiz N (2000) An expert system for the differential diagnosis of erythemato-squamous diseases. Expert systems with applications 18(1): 43-49.
  72. Übeyli ED (2009) Combined neural networks for diagnosis of erythemato-squamous diseases. Expert Systems with Applications 36(3): 5107-5112.
  73. Chang CL, Chen CH (2009) Applying decision tree and neural network to increase quality of dermatologic diagnosis. Expert Systems with Applications 36(2): 4035-4041.
  74. Xie J, Wang C (2011) Using support vector machines with a novel hybrid feature selection method for diagnosis of erythemato-squamous diseases. Expert Systems with Applications 38(5): 5809-5815.
  75. Nugroho H, Ahmad Fadzil MH, Shamsudin N, Hussein SH (2013) Computerised image analysis of vitiligo lesion: evaluation using manually defined lesion areas. Skin Research and Technology 19(1): e72-7.
  76. Alam MN, Munia TT, Tavakolian K, Vasefi F, MacKinnon N, et al. (2016) Automatic detection and severity measurement of eczema using image processing. In 2016 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), IEEE, pp. 1365-1368.
  77. De Guzman LC, Maglaque RP, Torres VM, Zapido SP, Cordel MO, et al. (2015) Design and evaluation of a multi-model, multi-level artificial neural network for eczema skin lesion detection. In: 2015 3rd International Conference on Artificial Intelligence, Modelling and Simulation (AIMS), IEEE, pp. 42-47.
  78. Prosperi MC, Marinho S, Simpson A, Custovic A, Buchan IE, et al. (2014) Predicting phenotypes of asthma and eczema with machine learning. BMC medical genomics 7(1): 1-0.
  79. Hurault G, Domínguez-Hüttinger E, Langan SM, Williams HC, Tanaka RJ, et al. (2020) Personalized prediction of daily eczema severity scores using a mechanistic machine learning model. Clinical & Experimental Allergy 50(11): 1258-1266.