International Journal of Image, Graphics and Signal Processing @ijigsp
Journal articles - International Journal of Image, Graphics and Signal Processing
Total articles: 1157
Left Ventricle Segmentation in Magnetic Resonance Images with Modified Active Contour Method
Research article
Accurate image segmentation is a pivotal problem in image processing. Segmenting the left ventricle (LV) in magnetic resonance images (MRIs) is essential for evaluating cardiac function. Several methods have been proposed and implemented for the segmentation of cardiac MRI, each with its own advantages and restrictions. Here, a modified region-based active contour model was applied for segmentation of the LV chamber, and a new semi-automatic algorithm was suggested that calculates the appropriate balloon force from the mean intensity of the region of interest in each image. The database consists of 2,039 MR images collected from 18 children under 18 years of age. The results were compared with previous studies according to two standards: Dice Metric (DM) and Point to Curve (P2C). The obtained segmentation results are better than values reported previously in several studies. Different points in the cardiac cycle and several slice levels were used, classified into three levels: base, mid, and apex. The best results were obtained at end diastole (ED) in comparison with end systole (ES), and on the base slice rather than the other slices, because of the larger LV size in the ED phase and the base slice. With segmentation of the LV in MRI based on the novel active contour and application of the suggested algorithm for balloon force calculation, the mean improvement of DM compared to Grosgeorge et al. is 19.6% in the ED phase and 49.5% in the ES phase. The mean improvement of P2C compared with the same study is 43.8% for ED and 39.6% for ES.
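The Dice Metric used for evaluation above compares a predicted segmentation mask with a ground-truth mask; a minimal sketch in plain Python (illustrative, not the authors' code):

```python
def dice_metric(pred, truth):
    """Dice overlap between two binary masks given as flat 0/1 lists."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Two empty masks agree perfectly by convention
    return 2.0 * inter / total if total else 1.0
```

A value of 1.0 means perfect overlap, 0.0 means no overlap at all.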
Free
Research article
Image segmentation is one of the most important steps in computer vision and image processing: it divides an image into meaningful regions based on pixel similarity. We propose a new segmentation algorithm based on image de-noising, since good segmentation results depend on noise-free images; in the presence of noise, proper segmentation results may not be obtained. A pre-processing stage is therefore necessary to de-noise the image, and the segmentation result depends on the quality of that pre-processing. In this paper, we propose a new integrated de-noising and segmentation approach called Level Set Segmentation of Images using Block Matching Local SVD Operator Based Sparsity and TV Regularization (BMLSVD-TV). The proposed method is divided into two stages. In the first stage, images are de-noised with the BMLSVD-TV algorithm. De-noising is a crucial aspect of image processing, and a few factors must be kept in mind: smoothing the flat areas, safeguarding the edges without blurring, keeping the textures, and creating no new artifacts. This stage comprises block matching, updating of the basis vectors, sparsity regularization, and TV regularization. Block matching searches for blocks that are similar to each other; once the matching blocks are grouped together, the data in the array exhibit a high level of correlation, and the sparse coefficients are gathered after adequate modification. Most of the noise in the image is minimized in the sparsity regularization step by employing de-noising algorithms such as Block Matching 3D with fixed basis vectors. The TV regularization step retains the edge information and produces the piecewise smoothness of the image. In the second stage, a contour is created on the de-noised image and evolved according to a defined level set function (LSF).
This combined approach segments image regions better than existing level set methods: compared with state-of-the-art level set methods, the proposed segmentation method is superior in terms of number of iterations, CPU time, and area covered. The model also recovers a good-quality restored image from a noisy one, with quality assessed by two important measures, peak signal-to-noise ratio (PSNR) and mean squared error (MSE); a higher PSNR and a lower MSE indicate better image quality, and the proposed de-noising method achieved higher PSNR values than existing methods. Image de-noising is a key component wherever recovering the original image content is essential for effective performance, and it is used in a variety of applications, including image restoration, visual tracking, image registration, image segmentation, and image classification. Compared with the other models in the field, this model provides accurate segmentation of objects based on de-noised images.
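The PSNR and MSE measures cited above are straightforward to compute; a small sketch, assuming 8-bit intensities with a peak value of 255:

```python
import math

def mse(ref, test):
    """Mean squared error between two equal-length intensity sequences."""
    return sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means better quality."""
    err = mse(ref, test)
    return float("inf") if err == 0 else 10.0 * math.log10(peak * peak / err)
```

Identical images give infinite PSNR; growing error lowers it.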
Free
Level Sets based Directed Surface Extraction
Research article
Directed surface extraction from CT images is the first task in the design of medical equipment. In this paper, a new approach based on the level set method is proposed to extract the directed surface from CT images. Two level set functions with corresponding speed functions are involved in this study. One is used to cut the desired bone from the input CT model, in which both the directed surface (usually the outermost surface) and the complex inner surface are contained. The other is used to remove the complex inner surface. The experimental results show the feasibility of the proposed method.
Free
Research article
Automatic Recognition of Diseased Cotton Plant and Leaves (ARDCPL) using Deep Learning (DL) carries great significance in agricultural research. The cotton plant and its leaves are severely infected by bacterial blight, caused by the bacterium Xanthomonas axonopodis pv. malvacearum, and by a new leaf-rolling disease caused by an unorthodox leaf roll dwarf virus. Existing ARDCPL research requires various complicated image pre-processing and feature extraction approaches and cannot ensure higher detection accuracy. This work suggests a deep Convolutional Neural Network (CNN) based model, DCPLD-CNN, that achieves higher accuracy by leveraging the ability of DL models to extract features from images automatically. Motivated by the enormous success of numerous pre-trained architectures on image classification tasks, this study also explores eight pre-trained CNN architectures: DenseNet121, NASNetLarge, VGG16, VGG19, ResNet50, InceptionV3, InceptionResNetV2, and Xception, fine-tuned via Transfer Learning (TL) to recognize diseased cotton plants and leaves. The pre-trained architectures are extended by adding extra dense layers after their final layers. Several Image Data Augmentation (IDA) methods were used to expand the training data, increase the models' generalization capability, and reduce overfitting. The proposed DCPLD-CNN model achieves an accuracy of 98.77% in recognizing disease in cotton plants and leaves. The customized DenseNet121 model achieved the highest accuracy, 98.60%, among all the pre-trained architectures. Several simulated experimental results exhibit the feasibility and practicality of the proposed method for this classification task.
Free
Leveraging Deep Learning Approach for the Detection of Human Activities from Video Sequences
Research article
Recognizing human actions from video sequences with deep learning approaches, by automatically deriving significant representations, has demonstrated effective results on unprocessed video information. Artificial intelligence (AI) systems, including monitoring, automation, and human-computer interfaces, have become crucial for security and human behaviour analysis. For the visual representation of video clips during the training phase, existing action identification algorithms mostly use pre-trained weights of various AI architectures, which affects feature consistency and persistence, including the separation between visual and temporal cues. To overcome this problem, this research proposes a 3-dimensional Convolutional Neural Network with Long Short-Term Memory (3D-CNN-LSTM) network that strategically concentrates on the useful information in the input frames to recognize the various human behaviours in video. Training uses stochastic gradient descent (SGD) optimization to identify the model parameters that best match the expected and observed outcomes. The proposed framework is trained, validated, and tested on the publicly accessible UCF11 benchmark dataset. According to the experimental findings, the accuracy rate was 93.72%, which is 2.42% higher than the previous state-of-the-art best result. Compared with several other relevant techniques in use, the suggested approach achieved outstanding accuracy.
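The SGD optimization mentioned above moves each parameter a small step against its gradient; a minimal sketch of one update (illustrative, not the authors' training loop):

```python
def sgd_step(params, grads, lr=0.01):
    """One stochastic-gradient-descent update: p <- p - lr * g."""
    return [p - lr * g for p, g in zip(params, grads)]
```

Repeated over mini-batches, this drives the loss toward a minimum; the learning rate `lr` controls the step size.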
Free
Research article
Aiming to design a novel image watermarking technique, this paper presents a new method of image watermarking using the lifting wavelet transform (LWT), the discrete wavelet transform (DWT), and one-dimensional linear discriminant analysis. In this blind watermarking technique, statistical features of the watermarked image are used to prepare the training and testing sets. Principal component analysis is then applied to reduce the obtained feature set, so the training time is reduced to the desired level and accuracy is enhanced. One-dimensional linear discriminant analysis is used for binary classification, as it can classify with good accuracy. The technique applies the discrete wavelet transform and the lifting wavelet transform in two different watermarking schemes for the image transformation; both transformations give higher tolerance against image distortion than other conventional transformation methods. One of the significant challenges of a watermarking technique is maintaining the proper balance between robustness and imperceptibility. The proposed blind watermarking technique exhibits imperceptibility of 43.70 dB for the Lena image with no attack for the first scheme (using LWT) and 44.71 dB for the second scheme (using DWT+LWT). The first watermarking scheme is tested for robustness and performs well against most image attacks. Compared with some existing watermarking methods, the technique is found to be robust against most image attacks while maintaining excellent watermarked-image quality.
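The abstract does not list the exact statistical features used for the training set; as an illustration only, per-block mean, standard deviation, and skewness are common choices (an assumption, not the authors' feature set):

```python
import math

def block_features(block):
    """Mean, standard deviation and skewness of one block of pixel values.

    These three features are an assumed example; the paper's feature set
    is not specified in the abstract.
    """
    n = len(block)
    mean = sum(block) / n
    var = sum((v - mean) ** 2 for v in block) / n
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in block) / (n * std ** 3) if std else 0.0
    return [mean, std, skew]
```

Feature vectors like this would then be stacked into a matrix before PCA reduction.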
Free
LiteDVDNet: Optimizing FastDVDNet for High-Speed Video Denoising
Research article
The growing demand for high-quality video processing in real-time applications calls for efficient denoising techniques that operate swiftly while maintaining visual fidelity. Conventional approaches often struggle to balance these competing requirements, especially when dealing with high-resolution video streams or resource-constrained environments. This study aims to develop methods for accelerating video denoising with deep convolutional neural networks while maintaining acceptable output quality. We selected the popular FastDVDNet denoising network, which operates on a sliding-window principle, as our baseline and starting point. This paper proposes several modifications of FastDVDNet that significantly enhance computational efficiency. We introduce four key optimizations: caching intermediate denoising results, reducing the intermediate channels in the input block, simplifying the convolutional blocks, and halving the number of channels. We evaluated these modifications on the Set8 dataset and compared the results with the original model at various noise levels. Finally, we introduce LiteDVDNet, a fine-tuned version of FastDVDNet that achieves an optimal balance between processing speed and denoising performance. We developed two variants: LiteDVDNet-32, which is 3× faster than the original model with only a 0.18 dB average PSNR reduction, and the more lightweight LiteDVDNet-16, which delivers a 5× speed improvement at the cost of a 0.61 dB average PSNR reduction.
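Caching intermediate denoising results exploits the overlap between consecutive sliding windows: a first-stage output computed for frame t can be reused when the window advances by one frame. A hypothetical sketch of such a cache (the class and method names are illustrative, not the paper's API):

```python
class FrameCache:
    """Reuse expensive first-stage outputs across overlapping windows."""

    def __init__(self, stage_fn):
        self.stage_fn = stage_fn   # expensive per-frame denoising stage
        self.cache = {}            # frame index -> cached stage output
        self.calls = 0             # counts actual stage evaluations

    def get(self, idx, frame):
        # Compute the stage output only the first time a frame is seen
        if idx not in self.cache:
            self.cache[idx] = self.stage_fn(frame)
            self.calls += 1
        return self.cache[idx]
```

With a 5-frame window advancing one frame at a time, all but one first-stage evaluation per step becomes a cache hit.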
Free
Local Content Based Image Authentication for Tamper Localization
Research article
Digital images make up a large component of multimedia information. Image authentication has therefore attained great importance, leading to the development of several image authentication algorithms. This paper proposes a block-based watermarking scheme for image authentication based on the edge information extracted from each block. A signature is calculated from each edge block of the image using a simple hash function and inserted into the same block. The proposed local edge based content hash (LECH) scheme recovers the original image without any distortion from the marked image after the hidden data have been extracted. It can also detect and localize tampered areas of the watermarked image. Experimental results demonstrate the validity of the proposed scheme.
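The per-block signature idea can be sketched with a standard hash over a block's edge bits; here `hashlib.sha256` stands in for the paper's unspecified "simple hash function", and the 32-bit output size is an assumption:

```python
import hashlib

def block_signature(edge_bits, n_bits=32):
    """Hash a block's edge map (sequence of 0/1 ints) to n_bits for embedding."""
    digest = hashlib.sha256(bytes(edge_bits)).digest()
    value = int.from_bytes(digest[:8], "big")
    return value & ((1 << n_bits) - 1)   # keep only the low n_bits
```

At verification time, the signature recomputed from the received block is compared with the embedded one; any mismatch localizes tampering to that block.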
Free
Local Entropy-based Non Blind Robust Image Watermarking: Case of Medical Images
Research article
Keywords: medical image, watermarking, spatial domain, local entropy, imperceptibility, robustness
Free
Local binary pattern family descriptors for texture classification
Research article
Texture classification is widely employed in many computer vision and pattern recognition applications. It is performed in two phases, namely feature extraction and classification. Several feature extraction methods and feature descriptors have been proposed, and the local binary pattern (LBP) has attracted much attention due to its simplicity and ease of computation. Several variants of LBP have been proposed in the literature. This paper presents a performance evaluation of LBP-based feature descriptors, namely LBP, uniform LBP (ULBP), LBP variance (LBPV), LBP Fourier histogram, rotated LBP (RLBP), and dominant rotation-invariant LBP (DRLBP). For performance evaluation, the nearest neighbor classifier is employed, and the benchmark Outex texture database is used, with performance measured in terms of classification accuracy and runtime.
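The basic LBP descriptor thresholds a 3×3 neighborhood at the center pixel and packs the eight comparison results into an 8-bit code; a minimal sketch (the clockwise bit ordering is one common convention):

```python
def lbp_code(patch):
    """8-bit LBP code of a 3x3 patch (list of 3 rows), clockwise from top-left."""
    c = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    # Set bit i when the i-th neighbor is at least as bright as the center
    return sum(1 << i for i, v in enumerate(neighbors) if v >= c)
```

A histogram of these codes over an image region forms the texture feature vector used for classification.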
Free
Lossless Image Compression Using A Simplified MED Algorithm with Integer Wavelet Transform
Research article
In this paper, we propose a lossless (LS) image compression technique combining a prediction step with the integer wavelet transform. The prediction step is a simplified version of the median edge detector (MED) algorithm used in JPEG-LS. First, the image is transformed using the prediction step and a difference image is obtained. The difference image then goes through an integer wavelet transform, and the transform coefficients are used in the lossless codeword assignment. The algorithm is simple, and test results show that it yields higher compression ratios than competing techniques while keeping computational cost close to theirs.
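For reference, the standard MED predictor from JPEG-LS, which the proposed technique simplifies, predicts each pixel from its left (a), upper (b), and upper-left (c) neighbors:

```python
def med_predict(a, b, c):
    """JPEG-LS MED predictor: min/max of (a, b) at edges, else planar a + b - c."""
    if c >= max(a, b):
        return min(a, b)   # c above both neighbors: likely horizontal edge
    if c <= min(a, b):
        return max(a, b)   # c below both neighbors: likely vertical edge
    return a + b - c       # smooth region: planar prediction
```

The difference image is then formed by subtracting each prediction from the actual pixel value.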
Free
Lossy Compression Color Medical Image Using CDF Wavelet Lifting Scheme
Research article
As the coming era is that of digitized medical information, an important challenge is the storage and transmission of enormous volumes of data, including color medical images. Compression is one of the indispensable techniques for solving this problem. In this work, we propose an algorithm for color medical image compression based on the biorthogonal CDF 9/7 wavelet transform coupled with the SPIHT coding algorithm, applying the lifting structure to mitigate the drawbacks of the wavelet transform. To assess the compression achieved by our algorithm, we compared the results with those obtained using wavelet filter banks. Experimental results show that the proposed algorithm is superior to traditional methods in both lossy and lossless compression for all tested color images, providing high PSNR and MSSIM values for color medical images.
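The lifting implementation of the CDF 9/7 wavelet factors its filter bank into two predict steps, two update steps, and a final scaling, using the standard coefficients shown below. A one-level 1D sketch (even-length input, symmetric boundary handling; illustrative rather than the paper's implementation):

```python
# Standard CDF 9/7 lifting coefficients
ALPHA, BETA = -1.586134342, -0.05298011854
GAMMA, DELTA = 0.8829110762, 0.4435068522
ZETA = 1.149604398

def cdf97_forward(signal):
    """One lifting level of the CDF 9/7 wavelet on an even-length sequence.

    Returns (approximation, detail) coefficient lists.
    """
    x = [float(v) for v in signal]
    n = len(x)
    # Predict 1: odd samples from even neighbors
    for i in range(1, n - 1, 2):
        x[i] += ALPHA * (x[i - 1] + x[i + 1])
    x[n - 1] += 2 * ALPHA * x[n - 2]          # symmetric boundary
    # Update 1: even samples from odd neighbors
    for i in range(2, n, 2):
        x[i] += BETA * (x[i - 1] + x[i + 1])
    x[0] += 2 * BETA * x[1]
    # Predict 2
    for i in range(1, n - 1, 2):
        x[i] += GAMMA * (x[i - 1] + x[i + 1])
    x[n - 1] += 2 * GAMMA * x[n - 2]
    # Update 2
    for i in range(2, n, 2):
        x[i] += DELTA * (x[i - 1] + x[i + 1])
    x[0] += 2 * DELTA * x[1]
    # Scale and deinterleave into approximation / detail subbands
    approx = [x[i] * ZETA for i in range(0, n, 2)]
    detail = [x[i] / ZETA for i in range(1, n, 2)]
    return approx, detail
```

For a constant signal the detail coefficients vanish (up to rounding of the published coefficients), which is what makes the transform effective for compression of smooth regions.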
Free
Loudspeaker operation status monitoring system based on power line communication technology
Research article
With the rapid development of science and technology, intelligent systems have been applied in various fields. A monitoring system for the operating status of loudspeakers based on power line communication was designed and implemented. This paper first analyzes the deficiencies of previous research and then concludes, based on the actual situation, that power line communication technology is more suitable for monitoring loudspeaker operating status than other communication technologies. The overall design, hardware design, and software design of the entire system are introduced. Finally, the reliability of the system is demonstrated by many experiments. Beyond monitoring the operating status of loudspeakers, the system can be used in other applications.
Free
Research article
Low-light scenes are characterized by loss of illumination, noise, color distortion, and serious information degradation. Low-light image enhancement is a significant part of computer vision technology; its methods aim to recover a normal-light image from a dark one, a noise-free image from a noisy one, and a clear image from a distorted one. In this paper, a low-light image enhancement technology based on a Retinex-based deep network combined with an image-processing-based module is proposed. The proposed technology combines traditional and deep learning methodologies within a simple yet efficient architectural framework that focuses on essential feature extraction. The proposed pre-processing module for low-light image enhancement is centered on the unique knowledge and features of an image: the choice of color model and image transformation technique depends on the image's dynamic range, to ensure high results in terms of color transfer, detail integrity, and overall visual quality. The proposed Retinex-based deep network has been trained and tested on images transformed by the pre-processing module, which leads to an effective supervised approach to low-light image enhancement and provides superior performance. The pre-processing module is implemented both as an independent image enhancement module in a computer system for image analysis and as a component module in a neural network system for image analysis. Experimental results on the low-light paired dataset show that the proposed method can reduce noise and artifacts in low-light images and can improve contrast and brightness, demonstrating its advantages. The proposed approach injects new ideas into low-light image enhancement, with practical applications in challenging low-light scenarios.
Free
Luminance-based Embedding Approach for Color Image Watermarking
Research article
In this paper, a new non-blind luminance-based color image watermarking technique is proposed. The original 512×512 color host image is divided into 8×8 blocks, and each block is converted to the YCbCr color space. A 32×32 monochrome image is used as a watermark and embedded in selected blocks of the original image; the selected blocks must have a log-average luminance close to the log-average luminance of the whole image. The DCT is applied to the Y component of each selected block, and four values of the watermark image are embedded into each selected block of the host image, in the first four AC coefficients, leaving the DC value unchanged. The watermark is extracted from the watermarked image using the same selected blocks and DCT coefficients that were used in the embedding process. The approach is tested against a variety of attacks and filters, such as highpass, lowpass, Gaussian, median, salt-and-pepper, and JPEG compression, and shows a great ability to preserve the watermark against these attacks.
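The log-average luminance used for block selection is the exponential of the mean of the log-luminances; a small sketch (the small offset avoids log 0, and its exact value is an assumption):

```python
import math

def log_average_luminance(luma, delta=1e-6):
    """exp(mean(log(delta + Y))) over a sequence of luminance values."""
    return math.exp(sum(math.log(delta + y) for y in luma) / len(luma))
```

Because it is a geometric mean, it is less dominated by a few very bright pixels than the arithmetic mean, which is why it is common in luminance-based selection.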
Free
Lung Tumor Segmentation and Staging from CT Images Using Fast and Robust Fuzzy C-Means Clustering
Research article
A lung tumor is the result of abnormal and uncontrolled cell division and growth in the lung region. Early detection and staging of lung tumors is of great importance for increasing the survival rate of affected patients. In this paper, a fast and robust fuzzy c-means clustering method is used to segment the tumor region from lung CT images. A morphological reconstruction process is performed prior to fuzzy c-means clustering to achieve robustness against noise, and computational efficiency is improved through median filtering of the membership partition. Tumor masks are then reconstructed using surface-based and shape-based filtering. Different features are extracted from the segmented tumor region, including the maximum diameter, and the tumor stage is determined according to the tumor staging system of the American Joint Committee on Cancer. A 3D shape of the segmented tumor is reconstructed from a series of 2D CT slices for volume measurement. The accuracy of the proposed system is found to be 92.72% for 55 randomly selected images from the RIDER Lung CT dataset of the Cancer Imaging Archive. Lower complexity in terms of iterations and connected components, as well as better noise robustness, is found in comparison with conventional fuzzy c-means and k-means clustering techniques.
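The size-based component of lung tumor T-staging maps the maximum diameter to a stage; a simplified sketch using the AJCC 8th-edition size thresholds (size only; the full staging system also considers invasion and separate nodules, and the paper's exact rules may differ):

```python
def t_stage_by_size(max_diameter_cm):
    """Size-only T stage using AJCC 8th-edition lung thresholds (3, 5, 7 cm)."""
    if max_diameter_cm <= 3:
        return "T1"
    if max_diameter_cm <= 5:
        return "T2"
    if max_diameter_cm <= 7:
        return "T3"
    return "T4"
```

The maximum diameter fed into such a rule would come from the feature extraction step described above.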
Free
Research article
Internet of Things (IoT) networks are multitier deployments that allow on-field data to be sensed, processed, communicated, and used for control decisions. These deployments utilize hardware components for data sensing and actuation, while cloud components are used for data processing and recommending control decisions. This process involves multiple low-security, low-computational-capacity, high-performance entities such as IoT devices, short-range communication interfaces, edge devices, routers, and cloud virtual machines. Of these, the IoT devices, routers, and short-range communication interfaces are highly vulnerable to a wide variety of attacks, including Distributed Denial of Service (DDoS), wormhole, sybil, Man-in-the-Middle (MiTM), masquerading, and spoofing attacks. To counter these attacks, researchers have proposed a wide variety of encryption, key-exchange, and data modification models, each with its own level of complexity, which reduces the QoS of the underlying IoT deployments. To overcome this limitation, researchers proposed blockchain-based security models, which allow high-speed operation for small-scale networks; but as network size increases, the delay needed for blockchain mining grows exponentially, limiting their applicability. To overcome this issue, a machine learning based blockchain model for QoS-aware secure IoT deployments (MLSMBQS) is proposed in this text. The proposed model initially deploys a Proof-of-Work (PoW) based blockchain, and then uses bioinspired computing to split the chain into multiple sub-chains. These sub-chains are termed shards and assist in reducing mining delay via a periodic chain-splitting process. The significance of this research is the use of Elephant Herd Optimization (EHO), which manages the number of blockchain shards by splitting or merging them under different deployment conditions.
The decision to split or merge depends on the blockchain's security and quality-of-service (QoS) performance. Due to the integration of EHO for the creation and management of side chains, the findings of this research show that the proposed model improves throughput by 8.5%, reduces communication delay by 15.3%, reduces energy consumption by 4.9%, and enhances security performance by 14.8% when compared with existing blockchain and non-blockchain security models. This is possible because EHO initiates dummy communication requests, which are arbitrarily segregated into malicious and non-malicious and used for continuous QoS and security performance improvement of the proposed model. Due to this continuous performance improvement, the proposed MLSMBQS model can be deployed in a wide variety of high-efficiency IoT network scenarios.
Free
Machine Learning Based Decision Support System for Coronary Artery Disease Diagnosis
Research article
Coronary artery disease (CAD) causes millions of deaths worldwide every year. As with any disease, the earliest possible diagnosis is quite important for heart diseases that cause so many deaths. Diagnosis has become more successful thanks to recent advances in medicine and the rapid improvement of computer science. In this study, the goal is to employ machine learning methods to facilitate rapid disease diagnosis without the need to observe negative outcomes. The dataset utilized in this study was obtained from an IEEE DataPort data repository and consists of two classes. First, new features were produced from the features in the dataset. Then, datasets consisting of multiple features were created using feature selection algorithms. Three models, specifically Support Vector Machines (SVM), the k-Nearest Neighbor algorithm (kNN), and Decision Tree ensembles (EDT), were trained using these custom datasets, and a hybrid model was created and compared with them. The best performance was obtained from SVM; its six performance criteria, in order of accuracy, sensitivity, specificity, F-measure, Kappa, and AUC, are 97.82%, 0.97, 0.99, 0.98, 0.96, and 0.98. In summary, the performance evaluation of the constructed models demonstrates that the recommended models could aid in the swift prediction of coronary artery disease in everyday life.
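Of the three models, kNN is the simplest to sketch: classify a sample by majority vote among its k nearest training samples. A minimal illustration (not the study's tuned model; the toy feature values are invented):

```python
import math

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    # Sort training indices by Euclidean distance to the query point
    order = sorted(range(len(X_train)),
                   key=lambda i: math.dist(X_train[i], x))
    votes = [y_train[i] for i in order[:k]]
    return max(set(votes), key=votes.count)
```

In practice the feature vectors would be the selected and PCA-style engineered features described above, with k chosen by cross-validation.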
Free
Mammogram Pre-processing Using filtering methods for Breast Cancer Diagnosis
Research article
Cancer is the second most commonly found disease, and breast cancer is the most common cancer in women. Breast cancer is curable and mortality can be reduced, but it needs to be identified early and treated accordingly. Radiologists use different modalities to identify breast cancer. The superiority of mammograms over other modalities lies in the minor radiation exposure and the ability to identify different types of cancers; therefore, mammography is the most frequently used imaging modality for breast cancer diagnosis. However, noise can be added while capturing the image, affecting the accuracy and the analysis of the result. Using different filtering techniques to pre-process mammograms can therefore enhance images and improve outcomes. For this study, the MIAS dataset has been used. This paper gives a comparative study of filters for denoising and enhancement of mammograms, focusing on the box filter, averaging filter, Gaussian filter, identical filter, convolutional 2D filter, median filter, and bilateral filter. The performance measures used to compare these filters are Mean Squared Error (MSE), Structural Similarity Index Measure (SSIM), and Peak Signal-to-Noise Ratio (PSNR), each evaluated on all images of the MIAS dataset. Results show that the Gaussian filter, median filter, and bilateral filter give better results than the other filters.
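The median filter compared above replaces each pixel with the median of its neighborhood, which suppresses salt-and-pepper noise while preserving edges better than averaging; a minimal 3×3 sketch with border replication:

```python
def median_filter3(img):
    """3x3 median filter on a 2D list of pixels, replicating border values."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Gather the 3x3 neighborhood, clamping indices at the borders
            vals = sorted(
                img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            )
            out[y][x] = vals[4]   # median of 9 values
    return out
```

A single bright noise pixel is outvoted by its eight neighbors, so it disappears without blurring nearby structure.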
Free
Management of vehicular traffic system using artificial bee colony algorithm
Research article
In this paper, an Adaptive Dynamic Scheduling Algorithm (ADSA) based on the Artificial Bee Colony (ABC) algorithm was developed for vehicular traffic control. The developed model optimally schedules green light timing in accordance with traffic conditions in order to minimize the Average Waiting Time (AWT) at a cross intersection. A MATLAB-based Graphical User Interface (GUI) traffic control simulator was developed. To demonstrate the effectiveness of the developed ADSA, it was validated against existing work in the literature. The developed ADSA achieved a performance of 76.67% for the AWT and 53.33% for vehicular queues cleared at the intersection. The results clearly show that the developed ADSA method succeeds in minimizing the average waiting time and vehicular queues at the intersection.
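A hypothetical sketch of queue-proportional green-time allocation at a cross intersection (the actual ADSA schedules green times via ABC optimization; this only illustrates the underlying scheduling idea, and the cycle and minimum-green values are invented):

```python
def allocate_green(queues, cycle=60.0, min_green=5.0):
    """Split a signal cycle among approaches in proportion to queue lengths,
    guaranteeing each approach a minimum green time."""
    n = len(queues)
    spare = cycle - n * min_green   # time left after minimum allocations
    total = sum(queues)
    if total == 0:
        return [cycle / n] * n      # no demand: split the cycle evenly
    return [min_green + spare * q / total for q in queues]
```

Longer queues receive proportionally more green time, which is the intuition behind minimizing the average waiting time.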
Free