Journal articles - International Journal of Intelligent Systems and Applications
All articles: 1159
Data Analysis for the Aero Derivative Engines Bleed System Failure Identification and Prediction
Research article
Mid-size gas/diesel aero-derivative power-generation engines are widely used at various industrial plants in the oil and gas industry. Bleed of Valve (BOV) system failure is one of the failure mechanisms of these engines. The BOV is part of the critical anti-surge system, and this kind of failure is almost impossible to identify while the engine is in operation. If the engine operates with an impaired BOV system, this leads to high maintenance costs during overhaul, an increased emission rate, higher fuel consumption, and a loss in efficiency. This paper proposes the use of readily available sensor data in a Supervisory Control and Data Acquisition (SCADA) system in combination with a machine learning algorithm for early identification of BOV system failure. Different machine learning algorithms and dimensionality reduction techniques are evaluated on real-world engine data. The experimental results show that BOV system failures can be effectively predicted from readily available sensor data.
Free
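As a hedged illustration of the kind of pipeline the abstract above describes, the sketch below trains a classifier on SCADA sensor snapshots after dimensionality reduction. The file name, column names (including the `bov_failed` label), and the choice of PCA plus a random forest are assumptions for illustration, not the paper's exact setup.

```python
# Hedged sketch: scaling + PCA + random forest on SCADA sensor snapshots.
# File name, feature columns and the "bov_failed" label are hypothetical.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("scada_snapshots.csv")          # hypothetical export from the SCADA historian
X = df.drop(columns=["bov_failed"])              # readily available sensor channels
y = df["bov_failed"]                             # 1 = BOV system later found impaired at overhaul

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),                 # sensor channels have very different ranges
    ("reduce", PCA(n_components=10)),            # one of several possible reduction techniques
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```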
Data Clustering Using Wave Atom
Research article
Clustering of huge spatial databases is an important issue, aiming to track the dense regions of the feature space for use in data mining, knowledge discovery, or efficient information retrieval. A clustering approach should be efficient and able to detect clusters of arbitrary shape, because spatial objects cannot simply be abstracted as isolated points: they differ in boundary, size, volume, and location. In this paper we use the discrete wave atom transform in clustering to achieve more accurate results. By using multi-resolution transforms such as wavelets and wave atoms, we can effectively identify arbitrarily shaped clusters at different degrees of accuracy. Experimental results on very large data sets show the efficiency and effectiveness of the proposed wave atom based clustering approach compared to other recent clustering methods. The experiments also show that we obtain more accurate results and better denoised output than the alternatives.
Free
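To make the multi-resolution idea in the abstract above concrete, here is a rough sketch that quantizes 2-D points onto a density grid, applies a wavelet transform (standing in for the wave atom transform, since no standard Python wave atom package is assumed here), thresholds the smoothed density, and labels connected dense cells as clusters. It is not the paper's algorithm; grid size, wavelet, and threshold are illustrative.

```python
# Hedged sketch: WaveCluster-style grouping, with a wavelet transform standing in
# for the wave atom transform used in the paper.
import numpy as np
import pywt
from scipy.ndimage import label

def multires_cluster(points, bins=64, wavelet="db2", level=2, keep=0.2):
    # Quantize points into a 2-D density grid.
    grid, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=bins)
    # Multi-resolution transform; denoising happens implicitly in the coarse band.
    approx = pywt.wavedec2(grid, wavelet, level=level)[0]
    # Keep only sufficiently dense cells of the coarser, transformed grid.
    dense = approx > keep * approx.max()
    # Connected dense cells form arbitrarily shaped clusters.
    labels, n_clusters = label(dense)
    return labels, n_clusters

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(6, 1, (500, 2))])
_, k = multires_cluster(data)
print("clusters found:", k)
```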
Data Mining of Students’ Performance: Turkish Students as a Case Study
Research article
Artificial neural networks have been used in different fields of artificial intelligence, and more specifically in machine learning. Although other machine learning options are feasible in most situations, the ease with which neural networks lend themselves to different problems, including pattern recognition, image compression, classification, computer vision, and regression, has earned them a remarkable place in the machine learning field. This research exploits neural networks as a data mining tool for predicting the number of times a student repeats a course, considering some attributes relating to the course itself, the teacher, and the particular student. Neural networks are used in this work to map the relationship between attributes related to students' course assessment and the number of times a student will possibly repeat a course before passing it. The hope is that the ability to predict students' performance from such complex relationships can help fine-tune academic systems and policies implemented in learning environments. To validate the power of neural networks in data mining, a Turkish students' performance database has been used; feedforward and radial basis function networks were trained for this task. The performance of these networks was evaluated in terms of achieved recognition rates and training time.
Free
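A minimal sketch of the feedforward half of the approach above follows, assuming a tabular file of course/teacher/student attributes with a `times_repeated` target. The column names are hypothetical, not the actual Turkish evaluation dataset schema, and scikit-learn has no built-in radial basis function network, so only the feedforward model is shown.

```python
# Hedged sketch: a feedforward network mapping course/instructor/student attributes
# to the number of times a course is repeated. Columns and file name are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("student_course_records.csv")        # hypothetical file
X = df[["attendance", "difficulty", "instructor_score", "class_size"]]
y = df["times_repeated"]                               # target: repeats before passing

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
net.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, net.predict(X_te)))
```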
Data Quality for AI Tool: Exploratory Data Analysis on IBM API
Research article
A huge amount of data is produced in every domain these days. Thus, when applying automation to any dataset, appropriately prepared data plays an important role in achieving efficient and accurate results. According to data researchers, data scientists spend 80% of their time preparing and organizing data. To overcome this tedious task, IBM Research has developed the Data Quality for AI tool, which provides a variety of metrics that can be applied to different datasets (in .csv format) to assess the quality of the data. In this paper, we show how the IBM API toolkit can be useful for different variants of datasets and present the results for each metric in graphical form. Readers may find this paper useful for understanding the workflow of the IBM data-quality tool; accordingly, the entire flow of using the Data Quality for AI toolkit is presented in the form of an architecture.
Free
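The exact Python interface of the IBM Data Quality for AI API is not assumed here; as a generic stand-in, the sketch below computes the kind of per-column quality metrics (completeness, cardinality, constant columns, duplicates) that such a toolkit reports for a .csv dataset.

```python
# Hedged sketch: generic column-level data-quality metrics computed with pandas.
# This is NOT the IBM Data Quality for AI API; it only illustrates the kind of
# per-column report such a toolkit produces for a .csv dataset.
import pandas as pd

def quality_report(csv_path: str) -> pd.DataFrame:
    df = pd.read_csv(csv_path)
    report = pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "missing_ratio": df.isna().mean(),            # completeness per column
        "n_unique": df.nunique(),                     # cardinality
        "constant": df.nunique() <= 1,                # uninformative columns
    })
    report["duplicate_rows"] = df.duplicated().sum()  # dataset-level duplicate count
    return report

print(quality_report("dataset.csv"))                  # hypothetical input file
```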
Data Visualization and its Proof by Compactness Criterion of Objects of Classes
Research article
This paper considers the problem of reducing the dimensionality of the feature space by nonlinearly mapping object descriptions onto a numerical axis. To reduce the dimensionality of the space, rules of agglomerative hierarchical grouping of different-type (nominal and quantitative) features are used. The groups do not intersect with each other, and their number is unknown in advance. The elements of each group are mapped onto the numerical axis to form a latent feature. The set of latent features is sorted by informativeness in the process of hierarchical grouping. A visual representation of objects obtained from this set, or a subset of it, is used as a tool for extracting hidden regularities in databases. The criterion for evaluating the compactness of the class objects is based on analyzing the structure of their connectivity. The analysis uses an algorithm that partitions the group representatives into disjoint classes by defining subsets of boundary objects. The execution of the algorithm guarantees a unique number of groups and unique membership of objects in them. This uniqueness property is used to calculate the compactness measure of the training samples. The compactness value is a dimensionless quantity in the interval [0, 1]. Dimensionless quantities are needed for estimating the structure of the feature space, for example when comparing different metrics, normalization methods, and data transformations, or when selecting and removing noise objects.
Free
Research article
This paper proposes a data-driven approximation of the Cumulative Distribution Function using finite mixtures of the Cumulative Distribution Function of the logistic distribution. Since the logistic mixture model cannot be solved using the maximum likelihood method, the mixture model is fitted to approximate the empirical cumulative distribution function using computational intelligence algorithms. The Probability Density Function is obtained by differentiating the estimate of the Cumulative Distribution Function. The proposed technique estimates the Cumulative Distribution Function of different benchmark distributions. The performance of the proposed technique is also compared with the state-of-the-art kernel density estimator and the Gaussian Mixture Model. Experimental results on the κ-μ distribution show that the proposed technique performs equally well in estimating the probability density function, while it outperforms the alternatives in estimating the cumulative distribution function. It is also evident from the experimental results that the proposed technique outperforms the state-of-the-art Gaussian Mixture Model and kernel density estimation techniques with less training data.
Free
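The sketch below illustrates the core idea in the abstract above: fit a mixture of logistic CDFs to the empirical CDF by minimizing squared error with a global optimizer (differential evolution here, one of several possible computational-intelligence-style choices), then obtain the PDF by differentiating the fitted CDF. The two-component mixture, the stand-in sample, and the optimizer are assumptions, not the paper's exact configuration.

```python
# Hedged sketch: fit a 2-component mixture of logistic CDFs to the empirical CDF,
# then differentiate analytically to get the PDF.
import numpy as np
from scipy.optimize import differential_evolution
from scipy.special import expit

data = np.random.default_rng(0).gamma(shape=3.0, scale=2.0, size=2000)  # stand-in sample
xs = np.sort(data)
ecdf = np.arange(1, xs.size + 1) / xs.size                              # empirical CDF

def mixture_cdf(x, p):
    w, m1, s1, m2, s2 = p
    return w * expit((x - m1) / s1) + (1 - w) * expit((x - m2) / s2)

def loss(p):
    return np.mean((mixture_cdf(xs, p) - ecdf) ** 2)

bounds = [(0, 1), (xs.min(), xs.max()), (1e-2, 10), (xs.min(), xs.max()), (1e-2, 10)]
fit = differential_evolution(loss, bounds, seed=0)
w, m1, s1, m2, s2 = fit.x

def mixture_pdf(x):
    # d/dx of the logistic CDF expit((x-m)/s) is expit(.)*(1-expit(.))/s.
    f1, f2 = expit((x - m1) / s1), expit((x - m2) / s2)
    return w * f1 * (1 - f1) / s1 + (1 - w) * f2 * (1 - f2) / s2

print("fitted parameters:", fit.x, "CDF MSE:", fit.fun)
```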
Research article
It is quite common to have access to geospatial (temporal/spatial) panel data generated by a set of similar studies for analysis in a meta-data setup. Within this context, researchers often employ pooling methods to evaluate the efficacy of meta-data analysis. One of the simplest techniques used to combine individual-study results is the fixed-effects model, which assumes that the true effect is equal for all studies. An alternative, and intuitively more appealing, method is the random-effects model. A paper addressing the efficient estimation problem, using this method in the aforesaid meta-data setup for the geospatial data at hand, was presented by the first author and his co-authors at the Map World Forum meeting in 2007 in Hyderabad, India. The purpose of that paper was to address the estimation problem of the fixed-effects model and to present a simulation study of efficient confidence-interval estimation of the mean true effect using the panel data and a random-effects model, in order to establish confidence-interval estimation readily usable in a decision-maker's setup. The present paper continues in the same perspective and proposes a much more efficient estimation strategy, furthering the gainful use of geospatial panel data in global, continental, regional, and national contexts of socioeconomic and other developmental issues. The statistical theme of efficient confidence-interval estimation has a wider ambit than its applicability to socioeconomic development alone; it is equally applicable to any area involving data mapping, for example the topically significant area of mitigating global environmental pollution to arrest global warming. Such issues have become more tractable as advances in GIS and GPS technologies have led to the concept of managing the "global village" in terms of geospatial meta-data. This fact has been a key motivation for the authors to prepare this improved paper, which contains a much more efficient confidence-interval estimation strategy for decision-making teams of managers in any area of application.
Free
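The paper's own, more efficient strategy is not reproduced here; as a baseline for the kind of pooled interval it refines, the sketch below computes the standard DerSimonian-Laird random-effects estimate of the mean true effect and its 95% confidence interval from study-level effects and variances. The study values are illustrative.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling with a 95% confidence
# interval for the mean true effect. Study effects and variances are illustrative.
import numpy as np

effects = np.array([0.42, 0.31, 0.55, 0.28, 0.47])    # per-study effect estimates
variances = np.array([0.02, 0.03, 0.015, 0.05, 0.025])

w = 1.0 / variances                                    # fixed-effects weights
mu_fe = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - mu_fe) ** 2)                 # heterogeneity statistic
k = effects.size
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_re = 1.0 / (variances + tau2)                        # random-effects weights
mu_re = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"mean true effect = {mu_re:.3f}, "
      f"95% CI = ({mu_re - 1.96 * se:.3f}, {mu_re + 1.96 * se:.3f})")
```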
Research article
In this paper, a deep hybrid system of computational intelligence with architecture adaptation for medical fuzzy diagnostics is proposed. The system improves the quality of medical information processing under conditions of overlapping classes, owing to its special adaptive architecture and training algorithms. The deep hybrid system under consideration can tune its architecture in situations where the number of features and diagnoses is variable. Special training algorithms are developed and optimized for different system architectures without retraining the synaptic weights that were tuned at previous steps. The proposed system was used to process three medical data sets (a dermatology dataset, the Pima Indians diabetes dataset, and a Parkinson's disease dataset), both with a fixed number of features and diagnoses and with an increasing number of them. The conducted experiments show a high quality of the medical diagnostic process and confirm the efficiency of the deep hybrid system of computational intelligence with architecture adaptation for medical fuzzy diagnostics.
Free
Deep Learning Sign Language Recognition System Based on Wi-Fi CSI
Research article
Many gesture recognition systems based on sensing Wi-Fi signals have been introduced because commercial off-the-shelf Wi-Fi devices can be used without any additional equipment. In this paper, a deep learning-based sign language recognition system is proposed. Wi-Fi CSI amplitude and phase information is used as input to the proposed model. The proposed model uses three types of deep learning networks: CNN, LSTM, and ABLSTM, with a complete study of the impact of optimizers, the use of CSI amplitude and phase, and the preprocessing stage. Accuracy, F-score, precision, and recall are used as performance metrics to evaluate the proposed model. The proposed model achieves 99.855%, 99.674%, 99.734%, and 93.84% average recognition accuracy for the lab, home, lab + home, and 5 different users in a lab environment, respectively. Experimental results show that the proposed model can effectively detect sign gestures in complex environments compared with some deep learning recognition models.
Free
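As a hedged sketch of one of the three model types named above (the LSTM variant), the snippet below classifies CSI sequences with amplitude and phase concatenated per time step. The number of subcarriers, sequence length, hidden size, and number of sign classes are illustrative, not the paper's configuration.

```python
# Hedged sketch: an LSTM classifier over Wi-Fi CSI sequences, with amplitude and
# phase concatenated per time step. All dimensions are illustrative.
import torch
import torch.nn as nn

class CSISignClassifier(nn.Module):
    def __init__(self, n_subcarriers=30, n_classes=10, hidden=128):
        super().__init__()
        # Each time step carries amplitude and phase for every subcarrier.
        self.lstm = nn.LSTM(input_size=2 * n_subcarriers, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, amplitude, phase):
        x = torch.cat([amplitude, phase], dim=-1)   # (batch, time, 2 * subcarriers)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                # classify from the last time step

model = CSISignClassifier()
amp = torch.randn(8, 200, 30)                       # dummy batch: 8 gestures, 200 CSI samples
pha = torch.randn(8, 200, 30)
print(model(amp, pha).shape)                        # -> torch.Size([8, 10])
```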
Research article
Adversarial attacks can be extremely dangerous, particularly in scenarios where the precision of facial expression identification is of utmost importance. Employing adversarial training methods proves effective in mitigating these threats but requires large computing resources. This study aims to strengthen deep learning model resilience against adversarial attacks while optimizing performance and resource efficiency. Our proposed method uses adversarial training techniques to create adversarial examples, which are permanently stored as a separate dataset. This strategy helps the model learn and enhances its resilience to adversarial attacks. The study also evaluates models by subjecting them to adversarial attacks, such as the One Pixel Attack and the Fast Gradient Sign Method, to identify any potential vulnerabilities. Moreover, we use two different model architectures to examine how well they are protected against adversarial attacks and compare their performances to determine the model that is most resistant while still maintaining good performance. The findings show that the combination of the proposed adversarial training technique and an efficient model architecture results in increased resistance to adversarial attacks. This also improves the reliability of the model and saves computational resources, as evidenced by the high accuracy of 98.81% achieved on the CK+ dataset. The adversarial training technique proposed in this study offers an efficient alternative for overcoming the limitations of computational resources, fortifying the model against adversarial attacks and yielding significant increases in model resilience without loss of performance.
Free
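The following minimal sketch shows the mechanism the abstract relies on: generating Fast Gradient Sign Method (FGSM) adversarial examples once and storing them as a separate dataset for later adversarial training. The model, epsilon, pixel range, and file name are assumptions for illustration.

```python
# Hedged sketch: generate FGSM adversarial examples once and store them as a
# separate tensor dataset for later adversarial training.
import torch
import torch.nn.functional as F

def fgsm_batch(model, images, labels, epsilon=0.01):
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Perturb in the direction of the sign of the loss gradient.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0, 1).detach()               # assumes pixel values in [0, 1]

def build_adversarial_dataset(model, loader, epsilon=0.01, path="adv_examples.pt"):
    model.eval()
    adv_images, adv_labels = [], []
    for images, labels in loader:
        adv_images.append(fgsm_batch(model, images, labels, epsilon))
        adv_labels.append(labels)
    # Persist the adversarial set so it can be mixed into later training runs.
    torch.save({"images": torch.cat(adv_images), "labels": torch.cat(adv_labels)}, path)
```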
Deep Learning in Character Recognition Considering Pattern Invariance Constraints
Research article
Character recognition is a field of machine learning that has been under research for several decades. The particular success of neural networks in pattern recognition, and therefore in character recognition, is laudable. Research has long shown that a network with a single hidden layer is capable of approximating any function, while the problems associated with training deep networks meant that they received little attention. Recently, breakthroughs in training deep networks through various pre-training schemes have led to their resurgence and to massive interest in them, with deep networks significantly outperforming shallow networks in several pattern recognition contests; moreover, the more elaborate distributed representation of knowledge present in the different hidden layers accords with findings on the biological visual cortex. This research work reviews some of the most successful pre-training approaches for initializing deep networks, such as stacked autoencoders and deep belief networks, on the basis of achieved error rates. More importantly, this research also investigates the performance of deep networks on some common problems associated with pattern recognition systems, such as translational invariance, rotational invariance, scale mismatch, and noise. To achieve this, Yoruba vowel character databases have been used in this research.
Free
Defect Analysis Using Artificial Neural Network
Research article
This paper deals with the detection of defects in manufactured ceramic tiles to ensure high quality. The problem concerns the automatic inspection of ceramic tiles using an Artificial Neural Network (ANN). The performance of the technique has been evaluated theoretically and experimentally on samples. The architecture of the system involves binary matrix processing and the use of an ANN to detect defects. The automatic inspection procedures described above have been implemented and tested on company floor tiles. The results obtained confirm the efficiency of the methodology in detecting defects in raw tiles and its relevance as a promising approach, both for the binary matrix representation and for inclusion in quality control and inspection programs.
Free
Defuzzification Index for Ranking of Fuzzy Numbers on the Basis of Geometric Mean
Research article
The importance of fuzzy numbers for expressing uncertainty in certain decision-making applications is observed in a large number of problems of different kinds. In decision-making problems, the best of the available alternatives is chosen to the extent possible. In the process of ordering the alternatives, the ranking of fuzzy numbers plays a key role. A large number of ranking methods, based on different features, are available in this domain. Owing to the complicated nature of fuzzy numbers, the methods introduced so far have suffered setbacks, posed difficulties, or shown drawbacks in one context or another. In addition, some methods are lengthy and complicated to apply to the problems concerned. In this article, a new ranking procedure based on defuzzification, stemming from the concepts of the geometric mean and the height of a fuzzy number, is proposed. Finally, numerical comparisons are made with other existing procedures for testing and validating the proposed method, with the support of some standard numerical examples.
Free
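The paper's exact index is not reproduced here; as a hedged illustration of the general idea, the sketch below ranks triangular fuzzy numbers by a defuzzification value that combines the geometric mean of the defining points with the height, assuming positive support.

```python
# Hedged sketch: ranking triangular fuzzy numbers (a, b, c) with height h by a
# height-weighted geometric mean of their defining points. Illustrative only;
# this is not the paper's exact defuzzification index.
from math import prod

def gm_rank(a: float, b: float, c: float, h: float = 1.0) -> float:
    assert 0 < a <= b <= c, "this illustrative index assumes positive support"
    return h * prod((a, b, c)) ** (1.0 / 3.0)

fuzzy_numbers = {"A1": (2, 4, 6), "A2": (3, 4, 5), "A3": (1, 5, 7)}
ranking = sorted(fuzzy_numbers, key=lambda k: gm_rank(*fuzzy_numbers[k]), reverse=True)
print(ranking)   # alternatives ordered from best to worst under this index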
Delay Computation Using Fuzzy Logic Approach
Research article
The paper presents a practical application of fuzzy sets and systems theory in predicting, with reasonable accuracy, delay arising from a wide range of factors pertaining to construction projects. In this paper we use fuzzy logic to predict delays on account of delayed supplies and labor shortage. It is observed that project scheduling software uses either deterministic or probabilistic methods for the computation of schedule durations, delays, lags, and other parameters. In other words, these methods use only quantitative inputs, leaving out the qualitative aspects associated with each individual activity. A qualitative aspect, such as the expertise of the mason or a lack of experience, can have a significant impact on the assessed duration. Such qualitative aspects do not find adequate representation in project scheduling software. A realistic project is considered, for which a PERT chart showing all the major activities in reasonable detail has been prepared. This project has been periodically updated until its completion. It is observed that some of the activities are delayed due to extraneous factors, resulting in an overall delay of the project. The software can calculate the overall delay through CPM (Critical Path Method) when each of the activity delays is reported. We then demonstrate that, by using fuzzy logic, these delays could have been predicted well in advance.
Free
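A minimal Mamdani-style sketch of the idea above follows, built with the scikit-fuzzy package: two qualitative delay factors (delayed supplies, labor shortage) drive a predicted activity delay in days. The universes, membership functions, and rules are illustrative assumptions, not the paper's fuzzy system.

```python
# Hedged sketch: a tiny Mamdani fuzzy system mapping two delay factors to a
# predicted activity delay. Membership functions and rules are illustrative.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

supply = ctrl.Antecedent(np.arange(0, 11, 1), "supply_delay")     # 0-10 severity scale
labor = ctrl.Antecedent(np.arange(0, 11, 1), "labor_shortage")
delay = ctrl.Consequent(np.arange(0, 31, 1), "activity_delay")    # predicted delay, days

for var in (supply, labor):
    var["low"] = fuzz.trimf(var.universe, [0, 0, 5])
    var["high"] = fuzz.trimf(var.universe, [4, 10, 10])
delay["short"] = fuzz.trimf(delay.universe, [0, 0, 12])
delay["long"] = fuzz.trimf(delay.universe, [8, 30, 30])

rules = [
    ctrl.Rule(supply["low"] & labor["low"], delay["short"]),
    ctrl.Rule(supply["high"] | labor["high"], delay["long"]),
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["supply_delay"] = 7        # severe supply problems
sim.input["labor_shortage"] = 3      # mild labor shortage
sim.compute()
print("predicted delay (days):", round(sim.output["activity_delay"], 1))
```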
Density based initialization method for k-means clustering algorithm
Research article
Data clustering is a basic technique for revealing the structure of a data set. K-means clustering is a widely accepted method of data clustering, which follows a partitional approach for dividing the given data set into non-overlapping groups. Unfortunately, it has the pitfall of randomly choosing the initial cluster centers. Due to its gradient nature, the algorithm is highly sensitive to the initial seed values. In this paper, we propose a kernel density-based method to compute the initial seed values for the k-means algorithm. The idea is to select initial points from the denser regions, because they truly reflect the properties of the overall data set; consequently, we avoid selecting outliers as initial seed values. We have verified the proposed method on real data sets with the help of different internal and external validity measures. The experimental analysis illustrates that the proposed method performs better than k-means, k-means++, and other recent initialization methods.
Free
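To illustrate the idea in the abstract above, the sketch below estimates a kernel density over the data, picks k seeds from high-density points that are mutually well separated, and hands them to k-means. The bandwidth (scipy's default) and the separation heuristic are illustrative, not the paper's exact procedure.

```python
# Hedged sketch: kernel-density-based seed selection for k-means.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.cluster import KMeans

def density_seeds(X, k):
    density = gaussian_kde(X.T)(X.T)               # density estimate at every data point
    order = np.argsort(density)[::-1]              # densest points first
    seeds = [X[order[0]]]
    min_gap = np.ptp(X, axis=0).max() / (2 * k)    # crude separation threshold
    for i in order[1:]:
        if all(np.linalg.norm(X[i] - s) > min_gap for s in seeds):
            seeds.append(X[i])
        if len(seeds) == k:
            break
    return np.array(seeds)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.5, (200, 2)) for c in ((0, 0), (5, 5), (0, 5))])
init = density_seeds(X, k=3)
km = KMeans(n_clusters=3, init=init, n_init=1, random_state=0).fit(X)
print("inertia with density-based seeds:", km.inertia_)
```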
Research article
The amount of cane fiber carried by the cane carrier varies due to the non-uniformity of the cane supply. The continuous variation of cane in the chute during juice extraction adversely affects the juice extraction efficiency of the mill. This paper deals with the development and performance comparison of a conventional controller and a fuzzy controller to maintain the cane level in the chute during juice extraction. The conventional controller is developed in the VHDL language and simulated using Xilinx ISE version 14.5. The fuzzy controller is developed and simulated using the fuzzy toolbox of MATLAB® version 7.11.0.584 (R2010b).
Free
Design High Impact Fuzzy Baseline Variable Structure Methodology to Artificial Adjust Fuel Ratio
Research article
This paper develops a Multi-Input Multi-Output (MIMO) fuzzy baseline variable structure controller (VSC) whose coefficients are tuned off-line by a gradient descent algorithm. The main goal is to adjust the optimal value of the fuel ratio (FR) in a motor engine. The fuzzy inference system (FIS) in the proposed methodology works on the basis of a Mamdani-Lyapunov FIS. To reduce dependence on the gain updating factor coefficients of the fuzzy methodology, a PID baseline method is introduced. This new method provides an optimal setting for the other factors created by the PID baseline method. The gradient descent methodology tunes, off-line, all coefficients of the baseline fuzzy and variable structure functions based on mathematical optimization. The performance of the proposed methodology is validated through comparison with a fuzzy variable structure methodology (FVSC). Simulation results indicate good fuel-ratio performance in the presence of different torque loads and external disturbances.
Free
Design Intelligent Robust Partly Linear Term SMC for Robot Manipulator Systems
Research article
In this paper, the development, modeling, and high-precision robust control of an electro-mechanical continuum robot manipulator are presented. The main controller is a sliding mode controller modified by a PD methodology based on the boundary derivative method. Parallel fuzzy logic theory is used to compensate for the system's dynamic uncertainty in the sliding-mode-based controller. The sliding mode controller (SMC) is a significant nonlinear controller under conditions of partly uncertain dynamic parameters of the system. It is used to control highly nonlinear systems, especially continuum robot manipulators, because it is robust and stable in the presence of partial uncertainties. A PD partly switching nonlinear SMC with the modified PD boundary derivative method is used to achieve stable tracking, while the parallel fuzzy-logic optimization adds intelligence to the control system through automatic tuning of the uncertainties of the PD-modified partly switching sliding mode methodology. An adaptive methodology is used for on-line tuning of the sliding surface slope and the gain updating factor of this methodology. Simulation results demonstrate the validity of the Mamdani parallel fuzzy-optimization control, with asymptotically stable tracking for different position inputs. The compensation yields a well-synchronized control signal under different excitation conditions.
Free
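The paper's PD partly switching SMC with fuzzy compensation is not reproduced here; as a textbook baseline for the same family of controllers, the sketch below simulates classical boundary-layer sliding mode control of a unit-inertia, 1-DOF plant with an unknown load term. Gains, boundary-layer width, plant, and disturbance are illustrative assumptions.

```python
# Hedged sketch: classical boundary-layer sliding mode control of a 1-DOF
# second-order plant with an unknown load term. A baseline, not the paper's SMC.
import numpy as np

lam, K, phi = 5.0, 20.0, 0.05            # surface slope, switching gain, boundary layer
dt, q, qd = 1e-3, 0.0, 0.0               # time step, position, velocity
q_des = 1.0                              # constant position set-point

for step in range(5000):
    e, e_dot = q_des - q, -qd
    s = e_dot + lam * e                  # sliding surface s = e_dot + lam * e
    sat = np.clip(s / phi, -1.0, 1.0)    # boundary layer replaces sign(s) to cut chattering
    u = K * sat + lam * e_dot            # switching term plus equivalent-control-like term
    disturbance = 2.0 * np.sin(0.01 * step)   # unknown load, bounded below the gain K
    q_ddot = u + disturbance             # unit-inertia plant: q'' = u + d
    qd += q_ddot * dt
    q += qd * dt

print(f"final position: {q:.3f} (target {q_des})")
```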
Design Intelligent System Compensator to Computed Torque Control of Spherical Motor
Research article
A spherical three degree-of-freedom (DOF) motor is controlled by a model-based fuzzy computed torque controller. The spherical motor has three revolute joints allowing the corresponding parts to move horizontally and vertically. When developing a controller using conventional control methodology (e.g., feedback linearization), a design scheme has to be produced, usually based on the system's dynamic model. The work outlined in this research applies soft computing to a new conventional controller to address these methodological issues. The computed torque controller (CTC) is an influential nonlinear controller for certain systems; the method is based on computing the required arm torque using a nonlinear feedback control law. When all dynamic and physical parameters are known, the CTC works superbly; in practice, however, many systems have uncertainties, and a fuzzy inference system (FIS) in the feedback loop is used to reduce this kind of limitation. Fuzzy logic provides functional capability without the use of a system dynamic model and has characteristics suitable for capturing the approximate, varying values found in a MATLAB-based setting. Based on this research, a model-based fuzzy computed torque controller applied to the spherical motor is presented in order to obtain a stable and robust nonlinear controller, and it shows good results compared with conventional and pure fuzzy logic controllers.
Free
Design Minimum Rule-Base Fuzzy Inference Nonlinear Controller for Second Order Nonlinear System
Research article
This research focuses on a proposed minimum rule-base PID computed torque algorithm with application to a continuum robot manipulator. The stability of the closed-loop system is proved mathematically based on the Lyapunov method. The classical computed torque controller (CTC) is robust to partial model uncertainties and external disturbances. This controller is one of the significant nonlinear methodologies based on the nonlinear dynamic formulation. One of the main targets of this research is to increase robustness using artificial intelligence methodology. Classical computed torque control has difficulty in handling unstructured model uncertainties. One can overcome this problem by combining a computed torque controller with artificial intelligence (e.g., fuzzy logic). To approximate a time-varying nonlinear dynamic system, a fuzzy system requires a large fuzzy rule base, and this large number of fuzzy rules causes a high computational load. To reduce the number of rules, this research focuses on a PD-like fuzzy plus integral methodology. This method is applied to the continuum robot manipulator to obtain the best performance.
Free