Journal articles - International Journal of Image, Graphics and Signal Processing

All articles: 1092

Local Entropy-based Non Blind Robust Image Watermarking: Case of Medical Images

Lamri Laouamer, Mohannad Alswailim

Research article

Keywords: Medical image, watermarking, spatial domain, local entropy, imperceptibility, robustness

Free

Local binary pattern family descriptors for texture classification

E. Jebamalar Leavline, D. Asir Antony Gnana Singh, P. Maheswari

Research article

Texture classification is widely employed in many computer vision and pattern recognition applications. It is performed in two phases, namely feature extraction and classification. Several feature extraction methods and feature descriptors have been proposed, and the local binary pattern (LBP) has attracted much attention due to its simplicity and ease of computation. Several variants of LBP have been proposed in the literature. This paper presents a performance evaluation of LBP-based feature descriptors, namely LBP, uniform LBP (ULBP), LBP variance (LBPV), LBP Fourier histogram, rotated LBP (RLBP), and dominant rotation-invariant LBP (DRLBP). For performance evaluation, a nearest neighbor classifier is employed. The benchmark OUTEX texture database is used, and performance is reported in terms of classification accuracy and runtime.
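As a point of reference for the descriptors compared above, the following is a minimal sketch of the plain 8-neighbor LBP code and its histogram feature, assuming an 8-bit grayscale image held in a NumPy array; the ULBP, LBPV, LBP Fourier histogram, RLBP, and DRLBP variants evaluated in the paper add further steps that are not reproduced here.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 local binary pattern codes for an 8-bit grayscale image.

    A minimal illustration of the plain LBP operator; the descriptor
    variants compared in the paper add further processing steps.
    """
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # 8 neighbors, enumerated clockwise starting from the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= ((neighbor >= center).astype(np.int32) << bit)
    return codes

def lbp_histogram(gray, bins=256):
    """Normalized LBP histogram used as the texture feature vector."""
    codes = lbp_image(gray)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

A nearest neighbor classifier, as used in the paper, would then compare such histograms with a distance measure such as L1 or chi-square.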

Free

Lossless Image Compression Using A Simplified MED Algorithm with Integer Wavelet Transform

Mohamed M. Fouad, Richard M. Dansereau

Research article

In this paper, we propose a lossless (LS) image compression technique combining a prediction step with the integer wavelet transform. The prediction step proposed in this technique is a simplified version of the median edge detector (MED) algorithm used in JPEG-LS. First, the image is transformed using the prediction step and a difference image is obtained. The difference image then goes through an integer wavelet transform, and the transform coefficients are used in the lossless codeword assignment. The algorithm is simple, and test results show that it yields higher compression ratios than competing techniques while keeping the computational cost close to that of competing techniques.
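The prediction step builds on the JPEG-LS median edge detector; the sketch below shows the standard (unsimplified) MED predictor for orientation, assuming an 8-bit grayscale NumPy image. The paper's specific simplification and the integer wavelet stage are not reproduced here.

```python
import numpy as np

def med_predict(img):
    """JPEG-LS median edge detector (MED): predict each pixel from its
    left (a), upper (b) and upper-left (c) neighbors."""
    img = img.astype(np.int32)
    pred = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            a = img[y, x - 1] if x > 0 else 0
            b = img[y - 1, x] if y > 0 else 0
            c = img[y - 1, x - 1] if (x > 0 and y > 0) else 0
            if c >= max(a, b):
                pred[y, x] = min(a, b)
            elif c <= min(a, b):
                pred[y, x] = max(a, b)
            else:
                pred[y, x] = a + b - c
    return pred

# The difference image fed to the integer wavelet transform would then be:
# residual = img.astype(np.int32) - med_predict(img)
```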

Free

Lossy Compression Color Medical Image Using CDF Wavelet Lifting Scheme

I. Boukli Hacene, M. Beladghem, A. Bessaid

Research article

In the coming era of digitized medical information, an important challenge is the storage and transmission of enormous volumes of data, including color medical images. Compression is one of the indispensable techniques for addressing this problem. In this work, we propose an algorithm for color medical image compression based on the biorthogonal CDF 9/7 wavelet transform coupled with the SPIHT coding algorithm, in which the lifting structure is applied to mitigate the drawbacks of the classical wavelet transform. To assess the compression achieved by our algorithm, we compared the results with those obtained using wavelet-based filter banks. Experimental results show that the proposed algorithm is superior to traditional methods in both lossy and lossless compression for all tested color images, providing very good PSNR and MSSIM values for color medical images.
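For orientation, here is a rough sketch of the wavelet stage using PyWavelets, whose 'bior4.4' filter bank is commonly identified with the CDF 9/7 pair; the lifting-scheme implementation and the SPIHT coder described in the paper are not shown, and the per-channel handling of color images is an assumption.

```python
import numpy as np
import pywt  # PyWavelets

def cdf97_decompose(channel, levels=3):
    """Multi-level 2-D wavelet decomposition of one image channel with the
    CDF 9/7 filter bank ('bior4.4' in PyWavelets).  SPIHT coding of the
    resulting coefficients is a separate step not sketched here."""
    return pywt.wavedec2(channel.astype(np.float64), 'bior4.4', level=levels)

def cdf97_reconstruct(coeffs):
    return pywt.waverec2(coeffs, 'bior4.4')

# Illustrative per-channel usage on an RGB medical image:
# coeffs_r = cdf97_decompose(img[..., 0]); coeffs_g = cdf97_decompose(img[..., 1]); ...
```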

Free

Loudspeaker operation status monitoring system based on power line communication technology

Biyue Diao, Guoping Chen, Feng He

Research article

With the rapid development of science and technology, intelligent systems have been applied in various fields. In this work, a monitoring system for the operating status of loudspeakers based on power line communication was designed and implemented. The paper first analyzes the deficiencies of previous research and then, based on the practical requirements, concludes that power line communication is more suitable for loudspeaker operating-status monitoring than other communication technologies. The overall design, hardware design, and software design of the entire system are introduced, and the reliability of the system is demonstrated through extensive experiments. Beyond monitoring the operating status of loudspeakers, the system can also be used in other applications.

Free

Low-Light Image Enhancement Technology Based on Image Categorization, Processing and Retinex Deep Network

Zhengbing Hu, Oksana Shkurat, Krzysztof Przystupa, Orest Kochan, Marharyta Ivakhnenko

Research article

Low-light scenes are characterized by loss of illumination, noise, color distortion, and serious information degradation, and low-light image enhancement is a significant part of computer vision technology. Low-light image enhancement methods aim to recover a normal-light image from a dark one, a noise-free image from a noisy one, and a clear image from a distorted one. In this paper, a low-light image enhancement technology based on a Retinex-based deep network combined with an image-processing-based preprocessing module is proposed. The proposed technology combines traditional and deep learning methodologies within a simple yet efficient architectural framework that focuses on essential feature extraction. The proposed preprocessing module is centered on the specific characteristics of an image: the choice of color model and image transformation technique depends on the image's dynamic range, to ensure high quality in terms of color transfer, detail integrity, and overall visual appearance. The proposed Retinex-based deep network has been trained and tested on images transformed by the preprocessing module, which leads to an effective supervised approach to low-light image enhancement and provides superior performance. The preprocessing module is implemented both as an independent image enhancement module in a computer system for image analysis and as a component module in a neural-network-based image analysis system. Experimental results on a paired low-light dataset show that the proposed method can reduce noise and artifacts in low-light images and can improve contrast and brightness, demonstrating its advantages. The proposed approach injects new ideas into low-light image enhancement and provides practical applications in challenging low-light scenarios.
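The Retinex-based deep network itself cannot be reproduced from the abstract, but the classical single-scale Retinex decomposition below (illumination estimated by a large Gaussian blur, log-reflectance kept as the enhanced signal) illustrates the Retinex idea the network builds on; the sigma value and the per-channel stretching are illustrative assumptions.

```python
import cv2
import numpy as np

def single_scale_retinex(bgr, sigma=80):
    """Classic single-scale Retinex: estimate illumination with a large
    Gaussian blur and keep the log-reflectance as the enhanced signal.
    This is only the hand-crafted ancestor of the Retinex-based deep
    network described in the paper, shown here for intuition."""
    img = bgr.astype(np.float64) + 1.0          # avoid log(0)
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)
    reflectance = np.log(img) - np.log(illumination + 1.0)
    # stretch each channel back to the displayable 0..255 range
    out = np.zeros_like(reflectance)
    for c in range(3):
        ch = reflectance[..., c]
        out[..., c] = 255 * (ch - ch.min()) / (ch.max() - ch.min() + 1e-8)
    return out.astype(np.uint8)
```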

Free

Luminance-based Embedding Approach for Color Image Watermarking

Jamal Ali Hussein

Research article

In this paper, a new non-blind luminance-based color image watermarking technique is proposed. The original 512×512 color host image is divided into 8×8 blocks, and each block is converted to the YCbCr color space. A 32×32 monochrome image is used as the watermark and embedded in selected blocks of the original image. The selected blocks must have a log-average luminance close to the log-average luminance of the whole image. The DCT is applied to the Y component of each selected block, and four values of the watermark image are embedded into each selected block of the host image: the watermark values are embedded in the first four AC coefficients, leaving the DC value unchanged. The watermark is extracted from the watermarked image using the same selected blocks and DCT coefficients used in the embedding process. The approach is tested against a variety of attacks and filters, such as high-pass, low-pass, Gaussian, median, salt-and-pepper, and JPEG compression, and shows a strong ability to preserve the watermark against these attacks.
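A sketch of the embedding step on a single 8×8 block is given below, following the outline above: DCT of the luminance channel, modification of the first four AC coefficients, DC left unchanged. The embedding strength alpha, the zig-zag positions chosen for the AC terms, and the use of OpenCV's YCrCb conversion (Y is still the first channel) are illustrative assumptions; block selection by log-average luminance and the extraction step are omitted.

```python
import cv2
import numpy as np

def embed_block(block_bgr, wm_values, alpha=8.0):
    """Embed four watermark values into one 8x8 block: DCT of the Y channel,
    modify the first four AC coefficients, keep the DC term unchanged.
    The strength `alpha` and the AC positions are illustrative assumptions."""
    ycrcb = cv2.cvtColor(block_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    coeffs = cv2.dct(ycrcb[..., 0].copy())
    ac_positions = [(0, 1), (1, 0), (2, 0), (1, 1)]   # first four AC terms in zig-zag order
    for (r, c), w in zip(ac_positions, wm_values):
        coeffs[r, c] += alpha * w                      # DC coefficient (0, 0) is left untouched
    ycrcb[..., 0] = cv2.idct(coeffs)
    return cv2.cvtColor(np.clip(ycrcb, 0, 255).astype(np.uint8),
                        cv2.COLOR_YCrCb2BGR)
```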

Free

Lung Tumor Segmentation and Staging from CT Images Using Fast and Robust Fuzzy C-Means Clustering

Rupak Bhakta, A. B. M. Aowlad Hossain

Research article

A lung tumor is the result of abnormal and uncontrolled cell division and growth in the lung region. Early detection and staging of lung tumors is of great importance for increasing the survival rate of affected patients. In this paper, a fast and robust fuzzy c-means clustering method is used for segmenting the tumor region from lung CT images. A morphological reconstruction process is performed prior to fuzzy c-means clustering to achieve robustness against noise, and computational efficiency is improved through median filtering of the membership partition. Tumor masks are then reconstructed using surface-based and shape-based filtering. Different features, including the maximum diameter, are extracted from the segmented tumor region, and the tumor stage is determined according to the tumor staging system of the American Joint Committee on Cancer. The 3D shape of the segmented tumor is reconstructed from a series of 2D CT slices for volume measurement. The accuracy of the proposed system is 92.72% for 55 randomly selected images from the RIDER Lung CT dataset of the Cancer Imaging Archive. Lower complexity in terms of iterations and connected components, as well as better noise robustness, are found in comparison with conventional fuzzy c-means and k-means clustering techniques.
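For reference, a plain fuzzy c-means iteration on pixel intensities is sketched below; the morphological reconstruction performed before clustering and the median filtering of the membership partition, which give the paper's method its robustness and speed, are omitted, and the number of clusters and the fuzzifier m are illustrative choices.

```python
import numpy as np

def fuzzy_cmeans(data, n_clusters=3, m=2.0, n_iter=100, eps=1e-5, seed=0):
    """Plain fuzzy c-means on a 1-D array of pixel intensities."""
    rng = np.random.default_rng(seed)
    x = data.reshape(-1, 1).astype(np.float64)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]        # cluster centers (c, 1)
        dist = np.abs(x - centers.T) + 1e-10                  # distances (n, c)
        inv = dist ** (-2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=1, keepdims=True)          # membership update
        if np.abs(u_new - u).max() < eps:
            u = u_new
            break
        u = u_new
    return centers.ravel(), u

# labels = np.argmax(u, axis=1).reshape(image.shape) gives the hard segmentation.
```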

Free

MLSMBQS: Design of a Machine Learning Based Split & Merge Blockchain Model for QoSAware Secure IoT Deployments

Shital Agrawal, Shailesh Kumar

Research article

Internet of Things (IoT) networks are multitier deployments in which on-field data is sensed, processed, communicated, and used for taking control decisions. These deployments utilize hardware-based components for data sensing and actuation, while cloud components are used for data processing and recommending control decisions. This process involves multiple low-security, low-computational-capacity, and high-performance entities such as IoT devices, short-range communication interfaces, edge devices, routers, and cloud virtual machines. Among these entities, the IoT devices, routers, and short-range communication interfaces are highly vulnerable to a wide variety of attacks, including Distributed Denial of Service (DDoS), wormhole, sybil, Man in the Middle (MiTM), masquerading, and spoofing attacks. To counter these attacks, researchers have proposed a wide variety of encryption, key-exchange, and data modification models, each with its own level of complexity, which reduces the QoS of the underlying IoT deployments. To overcome this limitation, blockchain-based security models were proposed; these allow high-speed operation for small-scale networks, but as the network size increases, the delay needed for blockchain mining grows exponentially, which limits their applicability. To overcome this issue, a machine learning based blockchain model for QoS-aware secure IoT deployments is proposed in this text. The proposed MLSMBQS model initially deploys a Proof-of-Work (PoW) based blockchain and then uses bioinspired computing to split the chain into multiple sub-chains. These sub-chains are termed shards and assist in reducing the mining delay via a periodic chain-splitting process. The significance of this research is the use of Elephant Herd Optimization (EHO), which manages the number of blockchain shards by splitting or merging them under different deployment conditions; this decision depends on the blockchain's security and quality of service (QoS) performance. Owing to the integration of EHO for the creation and management of side chains, the findings of this research show that the proposed model improves throughput by 8.5%, reduces communication delay by 15.3%, reduces energy consumption by 4.9%, and enhances security performance by 14.8% when compared with existing blockchain and non-blockchain based security models. This is possible because EHO initiates dummy communication requests, which are arbitrarily segregated into malicious and non-malicious ones and used for continuous QoS and security performance improvement of the proposed model. Due to this continuous performance improvement, the proposed MLSMBQS model is capable of deployment in a wide variety of high-efficiency IoT network scenarios.
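The MLSMBQS model itself (EHO-driven splitting and merging of shards) cannot be reconstructed from the abstract; the minimal proof-of-work loop below only shows where the mining delay that motivates sharding comes from. The difficulty parameter and block payload are illustrative.

```python
import hashlib
import time

def mine_block(prev_hash, payload, difficulty=4):
    """Minimal proof-of-work: find a nonce whose SHA-256 digest starts with
    `difficulty` hex zeros.  Mining time grows roughly 16x per extra zero,
    which is the delay the paper's shard splitting tries to contain."""
    prefix = "0" * difficulty
    nonce = 0
    start = time.time()
    while True:
        digest = hashlib.sha256(f"{prev_hash}{payload}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return digest, nonce, time.time() - start
        nonce += 1

# Example: block_hash, nonce, seconds = mine_block("genesis", "sensor readings")
```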

Free

Machine Learning Based Decision Support System for Coronary Artery Disease Diagnosis

Şükrü Alkan, Muhammed Kürşad UÇAR

Research article

Coronary artery disease (CAD) causes millions of deaths worldwide every year. For heart diseases responsible for such a large number of deaths, as for any disease, the earliest possible diagnosis is very important. Diagnosis has become more successful thanks to recent advances in medicine and the rapid improvement of computer science. In this study, the goal is to employ machine learning methods to facilitate rapid disease diagnosis without the need to observe negative outcomes. The dataset used in this study was obtained from an IEEE DataPort data repository and consists of two classes. First, new features were produced from the features in the dataset. Then, datasets consisting of multiple feature subsets were created using feature selection algorithms. Three models, namely Support Vector Machines (SVM), the k-Nearest Neighbor algorithm (kNN), and Decision Tree ensembles (EDT), were trained on these datasets. A hybrid model was also created, and its performance was compared with the other models. The best performance was obtained with SVM, whose accuracy, sensitivity, specificity, F-measure, Kappa, and AUC were 97.82%, 0.97, 0.99, 0.98, 0.96, and 0.98, respectively. In summary, the evaluation of the constructed models demonstrates that these recommended models could aid in the swift prediction of coronary artery disease in everyday practice.
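As a rough illustration of the modeling step, the sketch below trains and evaluates an RBF-kernel SVM with scikit-learn on a generic two-class tabular dataset; the file name cad.csv and the label column are placeholders for the IEEE DataPort data, and the paper's feature generation, feature selection, and hybrid model are not reproduced.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

# "cad.csv" and the "label" column are placeholders for the IEEE DataPort data.
df = pd.read_csv("cad.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("accuracy   :", accuracy_score(y_te, pred))
print("sensitivity:", recall_score(y_te, pred))                 # recall of the positive class
print("AUC        :", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```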

Free

Mammogram Pre-processing Using filtering methods for Breast Cancer Diagnosis

Shah Hemali, Agrawal Smita, Parita Oza, Sudeep Tanwar, Ahmed Alkhayyat

Research article

Cancer is the second most commonly encountered disease, and breast cancer is the most common cancer in women. Breast cancer is curable and its mortality can be reduced, but it needs to be identified early and treated accordingly. Radiologists use different imaging modalities for the identification of breast cancer. Mammography has advantages over other modalities, such as low radiation exposure and the ability to identify different types of cancers, and is therefore the most frequently used imaging modality for breast cancer diagnosis. However, noise can be introduced while capturing the image, affecting the accuracy of the analysis. Therefore, pre-processing mammograms with different filtering techniques can enhance the images and improve the outcomes. For this study, the MIAS dataset has been used. This paper gives a comparative study of filters for denoising and enhancement of mammograms, focusing on the box filter, averaging filter, Gaussian filter, identity filter, 2D convolution filter, median filter, and bilateral filter. The performance measures used to compare these filters are Mean Squared Error (MSE), Structural Similarity Index Measure (SSIM), and Peak Signal-to-Noise Ratio (PSNR). All performance measures are evaluated on all images of the MIAS dataset and compared accordingly. Results show that the Gaussian, median, and bilateral filters give better results than the other filters.
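A small sketch of the comparison procedure is given below: three of the studied filters are applied to one mammogram and scored with MSE, PSNR, and SSIM from scikit-image; the MIAS file name and the filter parameters are illustrative placeholders.

```python
import cv2
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity

img = cv2.imread("mdb001.pgm", cv2.IMREAD_GRAYSCALE)   # placeholder MIAS file name

filtered = {
    "gaussian":  cv2.GaussianBlur(img, (5, 5), 1.0),
    "median":    cv2.medianBlur(img, 5),
    "bilateral": cv2.bilateralFilter(img, 9, 75, 75),
}

for name, out in filtered.items():
    print(name,
          "MSE=",  mean_squared_error(img, out),
          "PSNR=", peak_signal_noise_ratio(img, out),
          "SSIM=", structural_similarity(img, out))
```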

Free

Management of vehicular traffic system using artificial bee colony algorithm

Risikat Folashade O. Adebiyi, Kabir Ahmad Abubilal, Abdoulie Momodou Sunkary Tekanyi, Busayo Hadir Adebiyi

Research article

In this paper, an Adaptive Dynamic Scheduling Algorithm (ADSA) based on the Artificial Bee Colony (ABC) algorithm was developed for vehicular traffic control. The developed model optimally schedules green-light timing in accordance with the traffic conditions in order to minimize the Average Waiting Time (AWT) at a cross intersection. A MATLAB-based Graphical User Interface (GUI) traffic control simulator was developed, and the effectiveness of the developed ADSA was validated against existing work in the literature. The developed ADSA achieved a performance of 76.67% for AWT and 53.33% for vehicular queues cleared at the intersection. The results clearly show that the developed ADSA method is successful in minimizing the average waiting time and the vehicular queues at the intersection.
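The exact ADSA formulation is in the paper; the compact artificial bee colony optimizer below only illustrates how ABC could search over a green-time vector to minimize a waiting-time objective. The colony size, trial limit, and the toy objective in the usage comment are illustrative assumptions.

```python
import numpy as np

def abc_minimize(objective, dim, lo, hi, colony=20, limit=20, iters=100, seed=0):
    """Small artificial bee colony optimizer for a green-time vector.
    `objective` stands in for the simulated average waiting time (AWT);
    it is assumed non-negative.  The real ADSA objective is in the paper."""
    rng = np.random.default_rng(seed)
    foods = rng.uniform(lo, hi, (colony, dim))
    fitness = np.array([objective(f) for f in foods])
    trials = np.zeros(colony, dtype=int)

    def try_neighbor(i):
        k = rng.integers(colony - 1)
        k = k if k < i else k + 1                       # partner food source != i
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, lo, hi)
        f = objective(cand)
        if f < fitness[i]:
            foods[i], fitness[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(colony):                         # employed bee phase
            try_neighbor(i)
        probs = 1.0 / (1.0 + fitness)                   # onlooker selection probabilities
        probs /= probs.sum()
        for i in rng.choice(colony, size=colony, p=probs):
            try_neighbor(i)
        worn = np.argmax(trials)                        # scout phase
        if trials[worn] > limit:
            foods[worn] = rng.uniform(lo, hi, dim)
            fitness[worn] = objective(foods[worn])
            trials[worn] = 0
    best = np.argmin(fitness)
    return foods[best], fitness[best]

# Toy example: four-phase green times (seconds) minimizing a stand-in objective.
# best_greens, best_awt = abc_minimize(lambda g: np.var(g), dim=4, lo=10, hi=60)
```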

Free

Mapping Urban Expansion Due to Special Economic Zones in the United Arab Emirates Using Landsat Archival Data (Case Study Dubai)

Naeema Alhosani

Research article

There are presently about 40 special economic zones (SEZs) across the seven federating emirates of the United Arab Emirates. These SEZs include business and technology hubs, science cities, recreational parks, and media parks. This study documents how the SEZs change or affect the transformation of urban expansion and land use practices in the UAE, showing the new industrial growth and land-use-related expansion that emerge around the SEZs compared to before their establishment. The research is based on sequential analysis of temporal geospatial digital maps generated from archival Landsat TM imagery from 2000 and Landsat 8 OLI imagery from 2015. The study area covers the Alqouz and Albarsha localities situated on the western side of the city of Dubai, an area with vibrant SEZs. The analysis results show a remarkable expansion of 23.6% in industrial and warehouse infrastructure, coupled with a major residential expansion of 18.6%. The findings of this research would help local authorities and corporations in planning for the future of these global businesses and local sustainability in the context of environmental planning and sustainability.
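The expansion percentages reported above come from comparing classified maps of the two dates; the small sketch below shows this kind of per-class area comparison, assuming two co-registered classified rasters (2000 and 2015) held as integer NumPy arrays with 30 m pixels. The classification of the Landsat TM and Landsat 8 OLI scenes themselves is not shown.

```python
import numpy as np

def class_area_change(classified_2000, classified_2015, class_id, pixel_area_m2=30 * 30):
    """Percentage change in the mapped area of one land-cover class between
    two co-registered classified Landsat scenes (30 m pixels assumed)."""
    area_2000 = np.count_nonzero(classified_2000 == class_id) * pixel_area_m2
    area_2015 = np.count_nonzero(classified_2015 == class_id) * pixel_area_m2
    return 100.0 * (area_2015 - area_2000) / area_2000
```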

Free

Mask Region-based Convolution Neural Network (Mask R-CNN) Classification of Alzheimer’s Disease Based on Magnetic Resonance Imaging (MRI)

Anil Kumar Pallikonda, P. Suresh Varma, B. Vivekanandam

Research article

Alzheimer's disease is a progressive neurological disorder that causes the brain to shrink (atrophy) and brain cells to die. A recent study found that 40 million people worldwide suffer from Alzheimer's disease (AD). Symptoms of AD include problems with language understanding, mood swings, behavioral issues, and short-term memory loss. A key research area for AD is the classification of its stages. In this paper, both binary and multi-class classification are applied, and a Mask Region-based Convolutional Neural Network (Mask R-CNN) is proposed for classifying the stages of Alzheimer's disease: CN, EMCI, MCI, LMCI, and AD. First, pre-processing is performed using a skull-stripping algorithm to remove noise. Second, a patch-wise U-Net is employed to segment the images and improve the classification process. The system's efficiency is then examined in MATLAB-based experiments using images from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, evaluated in terms of accuracy, precision, recall, specificity, and sensitivity. The proposed approach achieves classification accuracies of about 98.54%, 94.2%, 98.25%, 99.2%, and 99.02% for EMCI, CN, MCI, AD, and LMCI, respectively. Using Mask R-CNN with segmentation to classify subjects from CN to AD significantly improved classifier accuracy on the ADNI dataset.
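The full pipeline (skull stripping, patch-wise U-Net segmentation, Mask R-CNN classification) cannot be reproduced from the abstract; the sketch below only shows how a torchvision Mask R-CNN can be re-headed for a small number of classes, with the six classes (background plus the five AD stages) as an assumption for illustration.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 6  # background + {CN, EMCI, MCI, LMCI, AD}; an illustrative assumption

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")  # torchvision >= 0.13

# Replace the box and mask heads so the detector predicts the AD-stage classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, NUM_CLASSES)

# Training step sketch: images is a list of 3xHxW tensors, targets a list of
# dicts with "boxes", "labels" and "masks" built from the segmented MRI slices.
# losses = model(images, targets); sum(losses.values()).backward()
```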

Free

Mass Detection in Lung CT Images Using Region Growing Segmentation and Decision Making Based on Fuzzy Inference System and Artificial Neural Network

Atiyeh Hashemi, Abdol Hamid Pilevar, Reza Rafeh

Research article

Lung cancer presents one of the highest incidences and one of the highest mortality rates among all types of cancer. Detecting and curing the disease in its early stages gives patients a high chance of survival. This work aims at detecting lung nodules automatically in computed tomography (CT) images. Accordingly, this article presents a method to improve the efficiency of a lung cancer diagnosis system, proposing a region growing segmentation method to segment CT lung images. Cancer recognition is then performed by a Fuzzy Inference System (FIS) to differentiate between malignant, benign, and advanced lung nodules. Finally, the diagnostic performance of the FIS is further tested using artificial neural networks (ANNs). Our experiments show that the average sensitivity of the proposed method is 95%.
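A minimal version of the region growing step is sketched below: starting from a seed pixel, 4-connected neighbors are added while their intensity stays within a tolerance of the seed value. The tolerance and connectivity are illustrative choices rather than the paper's exact settings.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=15):
    """Grow a region from `seed` (row, col), adding 4-connected neighbors whose
    intensity differs from the seed value by at most `tol`."""
    h, w = img.shape
    visited = np.zeros((h, w), dtype=bool)
    mask = np.zeros((h, w), dtype=bool)
    seed_val = float(img[seed])
    queue = deque([seed])
    visited[seed] = True
    while queue:
        r, c = queue.popleft()
        if abs(float(img[r, c]) - seed_val) <= tol:
            mask[r, c] = True
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and not visited[nr, nc]:
                    visited[nr, nc] = True
                    queue.append((nr, nc))
    return mask
```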

Free

Mass Detection in Lung CT Images using Region Growing Segmentation and Decision Making based on Fuzzy Systems

Hamid Bagherieh, Atiyeh Hashemi, Abdol Hamid Pilevar

Research article

Lung cancer presents one of the highest incidences and one of the highest mortality rates among all types of cancer. Detecting and curing the disease in its early stages gives patients a high chance of survival. To help specialists search for and recognize lung nodules in tomography images, many research centers have developed computer-aided detection (CAD) systems for automating these procedures. This work aims at detecting lung nodules automatically in computed tomography images. Accordingly, this article presents a method to improve the efficiency of a lung cancer diagnosis system, proposing a region growing segmentation method to segment CT lung images followed by cancer recognition with a Fuzzy Inference System (FIS). The proposed method consists of three steps. The first step is pre-processing, which enhances contrast and reduces noise and corruption through linear filtering. In the second step, the region growing segmentation method is used to segment the CT images. In the third step, an expert system for decision making is developed, which differentiates between normal, benign, malignant, and advanced abnormality findings. The FIS can be of great help in diagnosing any abnormality in medical images. In this step, features such as area and color (gray values) are extracted and given to the FIS as input. The system uses fuzzy membership functions, which can be stated in the form of if-then rules, to find the type of abnormality. Finally, the analysis step is discussed and the accuracy of the method is determined. Our experiments show that the average sensitivity of the proposed method is more than 95%.

Free

Matrix-based Kernel Method for Large-scale Data Set

Weiya Shi

Research article

In the computation process of many kernel methods, one of the important steps is the formation of the kernel matrix. But because the size of the kernel matrix scales with the size of the data set, it is infeasible to store and compute the kernel matrix for large-scale data sets. To overcome this computational and storage problem, a new framework, the matrix-based kernel method, is proposed. By first dividing the large-scale data set into small subsets, we can treat the autocorrelation matrix of each subset as the basic computational unit. A novel polynomial-matrix kernel function is then adopted to compute the similarity between the data matrices in place of vectors. The proposed method greatly reduces the size of the kernel matrix, which makes its computation possible. Its effectiveness is demonstrated by experimental results on artificial and real data sets.
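The paper defines its own polynomial-matrix kernel; the sketch below only assumes a plausible form, a polynomial of the trace (Frobenius) inner product between subset autocorrelation matrices, to illustrate how replacing vectors by subset matrices shrinks the kernel matrix.

```python
import numpy as np

def autocorrelation_matrix(subset):
    """Autocorrelation matrix of one data subset (rows are samples)."""
    return subset.T @ subset / subset.shape[0]

def polynomial_matrix_kernel(A, B, degree=2, c=1.0):
    """Assumed polynomial-matrix kernel between two autocorrelation matrices:
    a polynomial of their trace (Frobenius) inner product.  The paper's own
    kernel definition may differ; this form is only an illustration."""
    return (np.trace(A @ B) + c) ** degree

# Splitting an (n, d) data set into subsets of size s yields only n/s matrix
# "samples", so the kernel matrix shrinks from (n, n) to (n/s, n/s).
data = np.random.randn(10_000, 20)
subsets = np.array_split(data, 100)                     # 100 subsets of 100 samples
mats = [autocorrelation_matrix(s) for s in subsets]
K = np.array([[polynomial_matrix_kernel(a, b) for b in mats] for a in mats])
print(K.shape)                                          # (100, 100) instead of (10000, 10000)
```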

Free

Mechanism and Algorithm for Indirect Schema Mapping Composition

Bo Wang, Bo Guo

Research article

There are a large number of indirect schema mappings between peers in a network. To improve the efficiency of data exchange and queries, indirect mappings need to be composed. Direct mappings can be derived directly from the constraints defined between schemas, but this does not hold for the composition of indirect mappings. We define the combination operations of schema elements in indirect mappings and give an expression for indirect mappings. We then analyze the composition of indirect mappings, propose a strategy, named schema element back, to solve the problem of indirect mapping composition, and give an indirect mapping composition generation algorithm based on this strategy. Experiments show that indirect mapping composition can improve the efficiency of data exchange and that, compared with other non-full mapping composition generation algorithms, the indirect mapping composition generated by our algorithm based on the schema element back strategy can completely eliminate the influence of the intermediate (media) schema without reducing composition efficiency.

Free

Medical Image Denoising Techniques against Hazardous Noises: An IQA Metrics Based Comparative Analysis

Shakil Mahmud Boby, Shaela Sharmin

Research article

Medical imaging has become a vital part of the early detection, diagnosis, and treatment of many diseases. Image denoising is therefore considered a crucial pre-processing step in medical imaging, restoring the original image from its noisy version without losing image features such as edges, corners, and other sharp structures. Ultrasound (US), Computed Tomography (CT), and Magnetic Resonance (MR) imaging are the most widely used medical imaging techniques, and they are often corrupted by hazardous noises, namely speckle, salt-and-pepper, Poisson, and Gaussian noise. To remove noise from medical images, researchers have proposed several denoising methods, each with its own assumptions, merits, and demerits. In this paper, a detailed comparative analysis of different denoising filtering techniques, namely median, Wiener, mean, hybrid median, Gaussian, bilateral, non-local means, and anisotropic diffusion filtering, is performed based on four widely used image quality assessment (IQA) metrics: Root Mean Squared Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), Mean Absolute Error (MAE), and Structural Similarity Index (SSIM). The results of this work reveal that the Gaussian, median, anisotropic diffusion, and non-local means filtering methods perform extraordinarily well at denoising speckle, salt-and-pepper, Poisson, and Gaussian noise, respectively, in all US, CT, and MR images.

Free

Medical Image Denoising Using Bilateral Filter

Devanand Bhonsle, Vivek Chandra, G.R. Sinha

Research article

Medical image processing is used by physicians and radiologists for the diagnosis of diseases. Noise is introduced into medical images by various factors in medical imaging; it corrupts the images and degrades their quality, suppressing edges and structural details and blurring boundaries. Since edge and detail preservation are very important for diagnosis, medical image denoising can help physicians diagnose diseases. Medical images include MRI, CT scans, X-ray images, ultrasound images, and others. In this paper, we implement bilateral filtering for medical image denoising. Its formulation and implementation are easy, but the performance of the bilateral filter depends on its parameters; therefore, the parameters must be estimated to obtain the optimum result. We applied bilateral filtering to medical images corrupted by additive white Gaussian noise with different variances. The bilateral filter is a nonlinear, local technique that preserves features while smoothing images. It removes additive white Gaussian noise effectively, but its performance is poor at removing salt-and-pepper noise.
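Since the abstract stresses that performance depends on the filter parameters, the sketch below corrupts a grayscale medical image with additive white Gaussian noise and sweeps OpenCV's bilateral filter parameters, scoring each setting with PSNR; the file name, noise level, and parameter grid are illustrative placeholders.

```python
import cv2
import numpy as np
from skimage.metrics import peak_signal_noise_ratio

img = cv2.imread("mri_slice.png", cv2.IMREAD_GRAYSCALE)     # placeholder file name
noisy = np.clip(img + np.random.normal(0, 15, img.shape), 0, 255).astype(np.uint8)

# The filter's behaviour is governed by the range (sigmaColor) and spatial
# (sigmaSpace) parameters, so a small sweep helps pick a working point.
for sigma_color in (25, 50, 75):
    for sigma_space in (25, 50, 75):
        denoised = cv2.bilateralFilter(noisy, d=9,
                                       sigmaColor=sigma_color,
                                       sigmaSpace=sigma_space)
        print(sigma_color, sigma_space,
              peak_signal_noise_ratio(img, denoised))
```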

Free
