International Journal of Mathematical Sciences and Computing @ijmsc
Journal articles - International Journal of Mathematical Sciences and Computing
All articles: 264

Integration based on Monte Carlo Simulation
Research article
In this short article, an attempt has been made to model Monte Carlo simulation for solving integration problems. The Monte Carlo method employs random sampling and the law of large numbers to generate values that are very close to the integral's true solution. Python programming has been used to implement the developed integration algorithm. The developed Python functions are tested on six different integration examples that are difficult to solve analytically. It has been observed that the Monte Carlo simulation gives results in good agreement with the exact analytical results.
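A minimal sketch of the underlying idea, assuming the usual mean-value estimator (the article's own function names and test integrals are not reproduced here):

```python
# Illustrative sketch only: a basic Monte Carlo estimate of a definite
# integral as (b - a) times the mean of f at uniformly sampled points.
import random
import math

def mc_integrate(f, a, b, n=100_000):
    """Estimate the integral of f over [a, b] by random sampling."""
    total = sum(f(random.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

if __name__ == "__main__":
    # Example: the integral of sin(x) over [0, pi] is exactly 2.
    print(mc_integrate(math.sin, 0.0, math.pi))
```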

Interval-Valued Fuzzy Soft Subhemiring of Hemiring and its Application
Research article
In this paper, we introduce the idea of the interval-valued fuzzy soft set (IVFSS) together with its definitions, operations, and properties. Several characteristics of IVFSSs are established, some theorems are proved, and a few examples illustrate the use of soft sets in a selection (decision-making) problem. We also introduce a comparability measure for two IVFSSs and discuss it through the presentation of a medical application problem.

Labeling a kind of Cubic Graphs by Subgraph Embedding Method
Research article
Based on a problem raised by Gao et al. (Bull. Malays. Math. Sci. Soc., 41 (2018) 443–453), we construct a family of cubic graphs that are double-edge blow-ups of ladder graphs. We determine the full friendly index sets of these cubic graphs by the labeling graph embedding method. At the same time, the corresponding labeling graphs are provided.

Low-tech steganography for covert operations
Research article
Text steganography, the art of concealing a secret text inside another innocuous text called the cover, is usually performed by inserting whitespace or punctuation marks, misspelling words, arbitrarily capitalizing words, inserting synonyms, changing font sizes and colors, etc. All of these have the disadvantage that they either arouse suspicion or are easily noticeable, and the hidden message may even be lost if the text is copied manually, i.e., handwritten. Furthermore, they are easily detectable by automated checkers. Still other methods require a stego-key in order to decrypt the message. In covert intelligence operations, transmission of the stego-key may not be possible at all, more so when the message is urgent. Digital communications and Internet connectivity may also be lacking in certain situations, and the only mode of message passing available may be the exchange of handwritten text on paper, which effectively rules out text modifications like font changes, whitespace insertion, etc., or any form of digital steganography like image/audio steganography. Finally, in almost all text-steganographic techniques, there is no provision for the receiver to detect whether or not there is indeed any message embedded. This is very important in intelligence operations where a number of decoy texts need to be sent with only one concealing the actual message. In this paper, we propose a new tool called STEGASSIST that can help the sender generate the stego-text manually. It is a low-tech form of steganography that is especially suited to covert operations like espionage or undercover journalism. In this method, the generated cover and the stego-text are identical; in other words, there is no cover-text. Moreover, decryption does not require a stego-key, and the stego-text may be printed or even handwritten and sent via unreliable messengers, or published, without arousing any suspicion. Finally, the received stego-text can be checked by the receiver to detect whether or not there is any actual message embedded in it.

Research article
Precise extrapolative mining and analysis of relevant datasets during or after any disease outbreak can assist the government, stakeholders, and relevant agencies in the health sector to make important decisions with respect to outbreak control and management. While prior work has concentrated on non-stationary long-term data, this work focuses on short-term, non-stationary, and relatively noisy data. In particular, a distinctive nonparametric machine learning method, a kernel-controlled probabilistic Gaussian process regression model, has been proposed and employed to model and analyze COVID-19 pandemic data acquired over a period of approximately six weeks. To accomplish this aim, the MATLAB 2018a computational and machine learning environment was used to develop and perform the Gaussian process extrapolative analysis. The results displayed high scalability and better performance than commonly used machine learning methods such as neural network, neuro-fuzzy network, random forest, regression tree, support vector machine, k-nearest neighbor, and discriminant linear regression models. These results offer a solid foundation for conducting research on reliable prognostic estimation and analysis of the emergence intensity and spread of contagious diseases.
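As a hedged illustration only (the study used MATLAB; scikit-learn, the RBF kernel choice, and the synthetic day-indexed series below are assumptions), a Gaussian process extrapolation of a short case-count series might look like:

```python
# Minimal sketch, not the paper's MATLAB implementation: Gaussian process
# regression on a short noisy daily series, extrapolated one week ahead.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

days = np.arange(42).reshape(-1, 1)                       # ~six weeks of day indices
cases = 50 + 5 * days.ravel() + np.random.normal(0, 20, 42)  # noisy toy series

kernel = ConstantKernel() * RBF(length_scale=7.0) + WhiteKernel()
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(days, cases)

future = np.arange(42, 49).reshape(-1, 1)                 # extrapolation window
mean, std = gpr.predict(future, return_std=True)
print(mean.round(1), std.round(1))
```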

Machine learning applied to cervical cancer data
Research article
Cervical cancer is one of the main causes of death in countries with a low per-capita income. It is quite complicated for any automated system to determine whether a patient is positive for the cancer on the basis of the results obtained from the various tests preferred by doctors. There were 898 new cases of cervical cancer diagnosed in Australia in 2014. The risk of a woman being diagnosed by age 85 is 1 in 167. We will try to use machine learning algorithms to determine whether the patient has cancer based on the numerous factors available in the dataset. Predicting the presence of cervical cancer can help the diagnosis process start at an earlier stage.
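A minimal sketch of such a workflow; the abstract does not specify the dataset layout or the algorithms used, so a synthetic binary-classification dataset and a random-forest classifier stand in here purely for illustration:

```python
# Illustrative sketch only: a generic tabular classification pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# stand-in for a table of patient risk factors with a positive/negative label
X, y = make_classification(n_samples=800, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```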

Research article
In countries with arid and semi-arid climates and water constraints, such as Iran, the use of groundwater resources is very important. There are various mathematics-based methods and software packages for modelling groundwater resources. This paper uses groundwater flow problems to illustrate possible approaches for providing an environment of active teaching. Mathematical models supported by software applications facilitate gaining insight into physical behaviors by investigating a host of scenarios and events, but they are poor at training critical thinking because they encapsulate the hardcore mathematical equations describing the problems. Whilst software engineering has transformed the intellectual capital accumulated between the 20th century and the middle of the 21st century into working tools, it has the drawback of encapsulating the core mathematics away from the common experience of students and practitioners. This diminishes critical thinking in a world of increasing risks and ought to be taken as a serious side effect of software engineering. This paper suggests a solution by building up a library of solvers using spreadsheets, with the effect that the encapsulated knowledge of building modelling solvers can permanently be brought to life in education within an active learning culture. Implementation was carried out in the same way for steady-state flow as well as explicit 2D and 3D finite-difference approximations for transient flow. This study raises concern about the encapsulated body of knowledge that contributed to the emergence and establishment of modelling software applications since 1980. This body of knowledge comprises a deeper understanding of the equations, often partial differential equations, describing physical problems, as well as their numerical transformation into systems of equations and the resulting properly and improperly posed systems of equations in terms of their assumptions and quality conditions. The outcome is the emergence of a cookbook mentality among the new breed of mathematical modelers without any critical thinking. The results revealed that spreadsheets can be used with the aid of the Solver function. This idea capitalizes on the capabilities of the net generation and opens up the possibility for the emergence of bottom-up open-source modelling platforms.
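For context, a minimal sketch of the explicit finite-difference update for 2D transient groundwater flow; the grid size, aquifer parameters, and boundary heads are illustrative assumptions, and the paper builds the analogous scheme in spreadsheets rather than Python:

```python
# Sketch of the explicit scheme for S*dh/dt = T*(d2h/dx2 + d2h/dy2).
import numpy as np

nx, ny, dx = 20, 20, 100.0          # grid cells and spacing [m]
T, S = 500.0, 0.05                  # transmissivity [m2/day], storativity [-]
dt = 0.2 * S * dx**2 / (4.0 * T)    # time step well inside the stability limit

h = np.full((nx, ny), 50.0)         # initial head [m]
h[:, 0], h[:, -1] = 60.0, 40.0      # fixed-head boundary columns

for _ in range(2000):
    h_new = h.copy()
    h_new[1:-1, 1:-1] = h[1:-1, 1:-1] + dt * T / S * (
        (h[2:, 1:-1] - 2.0 * h[1:-1, 1:-1] + h[:-2, 1:-1]) +
        (h[1:-1, 2:] - 2.0 * h[1:-1, 1:-1] + h[1:-1, :-2])) / dx**2
    h = h_new                       # edges stay fixed (Dirichlet boundaries)

print(h.round(1))
```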

Research article
This research is concerned with the development of a mathematical model for predicting the rate of human happiness and with outlining the factors that influence human happiness. The model was optimized and observations about the model's extreme values were made. The outcome of the optimization showed that happiness has neither a minimum nor a maximum level that should be required in humans; that is, someone's happiness could be close to 0% or even be up to 100%. Thereafter, the model was analysed and the collated real-life data were correlated with the model data (H model) using suitable statistical tools. The findings from the correlation showed that the questionnaire result attained a 70% degree of correlation with the estimated model result (H model), thus recommending the model as a standard measure for predicting the rate of human happiness.

Research article
Mathematical modeling plays a crucial role in epidemiology by helping us understand how an epidemic unfolds under different conditions. Respiratory infectious diseases have emerged repeatedly throughout our history and have significantly impacted all aspects of life. In the absence of a definitive treatment, vaccination and Non-Pharmaceutical Interventions (NPIs) such as social distancing, handwashing, wearing face masks, quarantine, isolation, and contact tracing have been essential in controlling their spread. This study develops a deterministic mathematical model to explore the dynamics of respiratory infectious diseases under key mitigation measures, including vaccination, face mask usage, quarantine, and isolation. The system of Ordinary Differential Equations (ODEs) is solved using Wolfram Mathematica, while the Next Generation Matrix (NGM) method is employed to determine the basic reproduction number. Stability analysis is conducted using the Jacobian matrix, and numerical simulations are carried out in Python using Jupyter Notebook. The analysis indicates that the model has a disease-free equilibrium (DFE), which is locally asymptotically stable when the basic reproduction number is less than one. This suggests that respiratory infectious diseases can be effectively controlled if vaccination and NPIs are implemented together. Sensitivity analysis highlights that the most critical factors for eradicating respiratory infectious diseases are the vaccine coverage rate (the proportion of susceptible individuals vaccinated) and vaccine efficacy.
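As a hedged illustration of this kind of compartmental model (not the authors' full system; the reduced SIR-with-vaccination structure and the parameter values below are assumptions):

```python
# Minimal sketch: an SIR-type system with a vaccination term, solved with
# SciPy instead of Wolfram Mathematica. beta, gamma, the vaccination rate v
# and the efficacy eps are illustrative values only.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.4, 0.1      # transmission and recovery rates [1/day]
v, eps = 0.01, 0.9          # vaccine coverage rate and vaccine efficacy
N = 1_000_000

def rhs(t, y):
    S, I, R = y
    dS = -beta * S * I / N - v * eps * S
    dI = beta * S * I / N - gamma * I
    dR = gamma * I + v * eps * S
    return [dS, dI, dR]

sol = solve_ivp(rhs, (0, 300), [N - 10, 10, 0], t_eval=np.linspace(0, 300, 301))
print("basic reproduction number without controls:", beta / gamma)
print("peak infections:", int(sol.y[1].max()))
```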

Research article
At the end of 2019, the novel coronavirus disease (COVID-19) was declared a major health hazard by the World Health Organization (WHO), and the only available way of stopping this threat was via non-pharmaceutical approaches. Most authors have studied COVID-19 transmission dynamics using mathematical modeling involving only the basic (major) compartments. In this study we have formulated a mathematical model for the transmission dynamics of COVID-19 which incorporates almost all possible scenarios at present. We have also analyzed the impact of prevention and control strategies. The model satisfies all the basic properties that an infectious disease model should fulfill: boundedness, positivity of its solutions, stability analysis, an epidemic equilibrium point, a basic reproduction number, and local stability of the disease-free equilibrium. We introduced a self-protection parameter, m, to analyze the impact of physical distancing, staying at home, using masks, washing hands, and so on. The impact of isolation and quarantine has been analyzed, and their effects on the numbers of exposed, infected, and dead people are clearly discussed. In addition, the effects of symptomatic and asymptomatic individuals on the value of the basic reproduction number have been examined. The numerical simulations of this study indicate that the government should increase isolation, quarantine, and self-protection rates. Additionally, to minimize the contact rate between susceptible and asymptomatic individuals, self-protection must be practiced everywhere and at all costs, so that both symptomatic and, importantly, asymptomatic individuals stop transmitting the virus.
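A toy illustration of how such a self-protection parameter scales the effective contact rate in a plain SIR setting; the paper's model has more compartments (exposed, quarantined, isolated, asymptomatic), and the rates below are assumed values:

```python
# Illustrative only: effective reproduction number (1 - m) * beta / gamma
# for a plain SIR model as the self-protection parameter m increases.
beta, gamma = 0.5, 0.1   # baseline contact/transmission and recovery rates

for m in (0.0, 0.3, 0.6, 0.9):
    r_eff = (1 - m) * beta / gamma
    print(f"m = {m:.1f}  ->  effective reproduction number = {r_eff:.2f}")
```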

Matrix Approach to Rough Sets Based on Tolerance Relation
Research article
There are many complex issues involving incomplete data in decision making in the field of computer science. These issues can be resolved with the aid of mathematical tools. Rough set theory is a useful technique when dealing with incomplete data. In classical rough set theory the information granules are equivalence classes; however, in real-life scenarios tolerance relations play a major role. By employing rough sets with Maximal Compatibility Blocks (MCBs) rather than equivalence classes, the challenges addressed in this research can be handled with ease. A novel approach to defining matrices on MCBs, and operations on them, is proposed. Additionally, the rough matrix approach is applied to locate a consistent block related to any set in the universal set.
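For intuition, a small sketch of computing MCBs, i.e. maximal sets of pairwise-tolerant elements, by brute force; the toy universe and relation are assumptions, and the paper's matrix construction itself is not reproduced:

```python
# Illustrative sketch: maximal compatibility blocks of a tolerance relation.
from itertools import combinations

U = ["a", "b", "c", "d"]
# tolerance relation as unordered compatible pairs (reflexivity is implied)
tol = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}

def compatible(x, y):
    return x == y or (x, y) in tol or (y, x) in tol

def is_block(block):
    return all(compatible(x, y) for x, y in combinations(block, 2))

blocks = [set(c) for r in range(1, len(U) + 1)
          for c in combinations(U, r) if is_block(c)]
mcbs = [b for b in blocks if not any(b < other for other in blocks)]
print(mcbs)   # [{'a', 'b', 'c'}, {'c', 'd'}]
```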

Means of the Semantic Search Personification on base of Ontological Approach
Research article
The main trends in information retrieval concerned with its personification and semantization are analyzed. Sources of knowledge about the main subjects and objects of the search process are considered. An ontological model of interaction between Web information resources and information consumers is proposed as a basis for search personification. Methods for the development, improvement, and usage of this model are defined. User characteristics are supplemented with socio-psychophysiological properties and ontologically personalized readability criteria. A software realization of semantic search based on this ontological approach is described.

Mining maximal subspace clusters to deal with inter-subspace density divergence
Research article
In general, subspace clustering algorithms identify an enormously large number of subspace clusters, which may include redundant clusters. This paper presents the Dynamic Epsilon based Maximal Subspace Clustering Algorithm (DEMSC), which handles both redundancy and inter-subspace density divergence, a phenomenon in density-based subspace clustering. The proposed algorithm aims to mine maximal and non-redundant subspace clusters. A maximal subspace cluster is defined by a group of similar data objects that share a maximal number of attributes. The DEMSC algorithm consists of four steps. In the first step, data points are assigned random unique positive integers called labels. In the second step, dense units are identified based on the density notion, using a proposed dynamically computed epsilon-radius specific to each subspace and a user-specified input parameter minimum points, τ. In the third step, the sum of the labels of the data objects forming each dense unit is calculated to compute its signature, which is hashed into a hash table. Finally, if a dense unit of a particular subspace collides with that of another subspace in the hash table, then both dense units exist with high probability in the subspace formed by combining the colliding subspaces. With this approach, efficient non-redundant maximal subspace clusters are identified, and the algorithm outperforms existing algorithms in terms of cluster quality and the number of resulting subspace clusters when experimented on different benchmark datasets.
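A toy sketch of the signature-hashing step described above (not the full DEMSC algorithm; the dense units below are assumptions):

```python
# Each dense unit is a set of point labels; its signature is the sum of those
# labels, and a collision across different subspaces suggests a candidate
# dense unit in the combined subspace.
from collections import defaultdict

# dense units per subspace: subspace (tuple of attribute indices) -> label sets
dense_units = {
    (0,): [{3, 7, 12}, {5, 9}],
    (1,): [{3, 7, 12}, {2, 8}],
}

table = defaultdict(list)
for subspace, units in dense_units.items():
    for unit in units:
        table[sum(unit)].append((subspace, frozenset(unit)))

for signature, entries in table.items():
    subspaces = {s for s, _ in entries}
    if len(subspaces) > 1:                       # collision across subspaces
        combined = tuple(sorted(set().union(*subspaces)))
        print(f"signature {signature}: candidate dense unit in subspace {combined}")
```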

Research article
The socio-economic evolution of populations has undergone rapid and multiple changes in recent decades, including dietary habits characterized by the consumption of fresh products out of season and widely available throughout the year. The cultivation of fruit, vegetable, and flower species under shelters has developed from classical structures to agro-industrial greenhouses, currently known for their modernity and high level of automation (heating, misting, air conditioning, monitoring, regulation and control, computer supervision, etc.). New techniques have emerged, including the use of devices for controlling and regulating climate variables in a greenhouse (temperature, humidity, CO2 concentration, etc.) and the exploitation of artificial intelligence such as neural networks and/or fuzzy logic. Currently, the climate computer offers many benefits and solves problems related to regulation, monitoring, and control. Greenhouse growers remain vigilant and attentive in the face of this technological development; they must ensure competitiveness and optimize their investment/production costs, which continue to grow. The application of artificial intelligence in industry has seen considerable growth, which is not the case in the field of agricultural greenhouses, where adoption remains timid. It is for this reason that we undertook research work in this area and conducted a simulation based on meteorological data in MATLAB Simulink in order to analyze the thermal and energy behavior of the greenhouse microclimate.

Modelling Taylor's Table Method for Numerical Differentiation in Python
Research article
In this article, an attempt has been made to explain and model the Taylor table method in Python. A step-by-step algorithm has been developed, and the methodology has been presented for programming. The developed TT_method() function has been tested with the help of four problems, and accurate results have been obtained. The developed function can handle any number of stencils and is capable of producing the results instantaneously. This eliminates the task of hand calculation, so the user can focus directly on problem solving rather than spending hours discretizing the problem.
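For orientation, a minimal sketch of the idea behind a Taylor-table derivation (this is not the article's TT_method() implementation): the finite-difference weights for the m-th derivative on a given stencil solve a small linear system built from Taylor-series coefficients.

```python
# Weights c such that f^(m)(x0) ~ sum_j c[j] * f(x0 + stencil[j]*h) / h**m.
import numpy as np
from math import factorial

def fd_weights(stencil, m):
    n = len(stencil)
    A = np.array([[s**i / factorial(i) for s in stencil] for i in range(n)], float)
    b = np.zeros(n)
    b[m] = 1.0
    return np.linalg.solve(A, b)

print(fd_weights([-1, 0, 1], 1))   # central first derivative: [-0.5, 0, 0.5]
print(fd_weights([-1, 0, 1], 2))   # central second derivative: [1, -2, 1]
```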

Research article
This study presents the modelling of the impacts of climate change on water resources, taking the Mtera dam in Tanzania as a case study. Data for climate variables at four stations were obtained from the Tanzania Meteorological Agency (TMA), while data for water level were obtained from the Rufiji Basin Development Authority (RUBADA). The study aimed at performing regression analysis on all stations to analyze the impacts of changes in climate variables on water level. Results show that rainfall was a significant predictor of water level at Iringa and Dodoma, while temperature and sunshine were significant at the Mbeya station. Changes in climate variables accounted for 37% of the fluctuations of the water level in the dam. It was recommended that TANESCO construct small dams on the upper side of the Mtera dam to harvest rainwater during the rainy season. In the long run, TANESCO should invest in alternative sources of energy.
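A hedged sketch of such a per-station regression; the synthetic data frame below merely stands in for the TMA and RUBADA records, and the column names are illustrative:

```python
# Ordinary least squares regression of water level on climate variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "rainfall": rng.gamma(2.0, 20.0, 60),        # mm
    "temperature": rng.normal(25.0, 2.0, 60),    # deg C
    "sunshine": rng.normal(7.0, 1.5, 60),        # hours/day
})
# synthetic water level [m] with an assumed relation, for illustration only
df["water_level"] = (690 + 0.02 * df["rainfall"] - 0.1 * df["temperature"]
                     + rng.normal(0, 0.5, 60))

X = sm.add_constant(df[["rainfall", "temperature", "sunshine"]])
print(sm.OLS(df["water_level"], X).fit().summary())   # coefficients, p-values, R^2
```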

Modification on AES-GCM to increment ciphertext randomness
Research article
Today there are many cryptographic algorithms designed to maintain data confidentiality; one of these algorithms is AES. In AES-GCM, the key together with the IV is used to encrypt the plaintext and obtain the ciphertext, instead of just the key as in traditional AES. The IV is used with the key in order to obtain different ciphertexts when the same plaintext is encrypted more than once with the same key. In this paper, the mechanism for changing the IV each time in AES-GCM was modified to obtain more randomness in the ciphertext, thus increasing the difficulty of breaking the encrypted text through analysis to recover the original text. NIST statistical functions were used to measure the randomness ratio of the encrypted text before and after the modification, and there was a clear rise in the randomness ratio of the ciphertext obtained with the modified algorithm compared with the ciphertext produced by the normal AES-GCM.
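For reference, a short sketch of the unmodified baseline: standard AES-GCM with a fresh random IV per message, using the Python cryptography package (this does not implement the paper's modified IV mechanism):

```python
# Encrypting the same plaintext twice with the same key but fresh IVs yields
# different ciphertexts.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

plaintext = b"the same plaintext"
for _ in range(2):
    iv = os.urandom(12)                       # new 96-bit IV for every message
    ct = aesgcm.encrypt(iv, plaintext, None)  # different ciphertext each time
    print(iv.hex(), ct.hex())
```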

Modified DES using Different Keystreams Based On Primitive Pythagorean Triples
Research article
Symmetric-key encryption is a traditional form of cryptography in which a single key is used to encrypt and decrypt a message. In a symmetric-key algorithm, before any encrypted message is transmitted, the sender and receiver must know the key value in advance. There are several drawbacks in symmetric-key algorithms. In some algorithms, the size of the key should be the same as the size of the original plaintext, and maintaining and remembering such a key is very difficult. Further, in symmetric-key algorithms, several rounds have to be performed to produce the ciphertext, and often the same key is used in each round, so the subkey generated in the current round depends fully on the previous round. To avoid these problems, a novel approach to generating the key from a keystream for any symmetric-key algorithm using Primitive Pythagorean Triples (PPT) is proposed in this paper. The main advantage of this method is that the key value generated from the keystream is chosen by both the sender and the receiver. Further, the size of the key sequence is not limited; it can be of arbitrary length. Since the generated keystream is random, neither the sender nor the receiver needs to remember such keys.
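As background only, a short sketch of generating primitive Pythagorean triples with Euclid's formula; how the paper derives the DES keystream from these triples is not reproduced here:

```python
# Primitive triples (a, b, c) with a^2 + b^2 = c^2 come from coprime m > n > 0
# of opposite parity via a = m^2 - n^2, b = 2mn, c = m^2 + n^2.
from math import gcd

def primitive_triples(limit):
    """Yield primitive Pythagorean triples with hypotenuse c <= limit."""
    m = 2
    while m * m + 1 <= limit:
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                a, b, c = m*m - n*n, 2*m*n, m*m + n*n
                if c <= limit:
                    yield a, b, c
        m += 1

print(list(primitive_triples(50)))
# [(3, 4, 5), (5, 12, 13), (15, 8, 17), (7, 24, 25), (21, 20, 29), ...]
```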

Research article
Classification is the task of assigning each data point in a data set a class based upon the values or characteristics of its attributes. In machine learning, a very simple and powerful tool for this is the k-Nearest Neighbor (kNN) algorithm. It is based on the concept that the data points of a particular class are neighbors of each other. For a given test or unknown data point, kNN measures the Euclidean distances from it to all the data points of all the classes in the training data in order to find the class to which it is a neighbor. Then, out of the k nearest distances, where k is any number greater than or equal to 1, the class to which the test or unknown data point is nearest the most number of times is the class assigned to it. In this paper, I propose a variation of kNN, which I call the ANN method (Alternative Nearest Neighbor) to distinguish it from kNN. The striking feature of ANN that makes it different from kNN is its definition of neighbor. In ANN, the unknown data point is a neighbor of the class for which the maximum Euclidean distance from the unknown data point to the class's data points is less than or equal to the maximum Euclidean distance between all the training data points of that class. It follows naturally that ANN gives a unique solution for each unknown data point, whereas in kNN the solution may vary depending on the value of the number of nearest neighbors k. So, in kNN, as k is varied the performance may vary too; this is not the case in ANN, whose performance for a particular training data set is unique. For the training data [1] considered in this paper, ANN gives a 100% accurate result.
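A small sketch of the neighbor rule as stated above; the toy data are assumptions, and cases where several classes (or none) satisfy the condition are simply returned as-is:

```python
# ANN rule: x is a neighbor of class c if the maximum distance from x to the
# training points of c does not exceed the maximum pairwise distance
# (diameter) within c.
import numpy as np
from itertools import combinations

def ann_classify(train, labels, x):
    train, x = np.asarray(train, float), np.asarray(x, float)
    labels = np.asarray(labels)
    assigned = []
    for c in np.unique(labels):
        pts = train[labels == c]
        diameter = max(np.linalg.norm(p - q) for p, q in combinations(pts, 2))
        if max(np.linalg.norm(x - p) for p in pts) <= diameter:
            assigned.append(c)
    return assigned

train = [[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]]
labels = ["A", "A", "A", "B", "B", "B"]
print(ann_classify(train, labels, [0.5, 0.5]))   # -> ['A']
```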

Research article
Water loss in water distribution systems (WDS) is a serious problem in Tanzania and in third-world countries at large. A lot of water is lost on its way before reaching consumers. This causes a shortage of the water supply, which leads to a loss of revenue for the concerned water authorities. The control or reduction of water loss in the WDS is closely dependent on the commitment of the decision-makers and on the strategies and budget they set for that purpose. This paper presents a combined model of Multi-Criteria Decision Making (MCDM) and numerical optimization techniques which may help decision-makers to prioritize and select the best strategies to be used in the management of water loss in the WDS at the Moshi Urban Water Supply and Sanitation Authority (MUWSA), Tanzania. The Multi-Criteria Decision Making methods, namely Multi-Attribute Value Theory (MAVT), Simple Multi-Attribute Rating Technique Exploiting Ranks (SMARTER), and Complex Proportional Assessment (COPRAS), were used to evaluate and prioritize the strategies, whereas the Integer Linear Programming (ILP) technique, a numerical optimization technique, was used to select the best strategies or alternatives to be employed in water loss management. The results show that the most preferred alternative is the replacement of dilapidated pipes, while the least preferred alternative is network zoning. The model selects thirteen out of sixteen alternatives, which cost 97% (TZS 235.71 million) of the total budget set by the water authority, to form a portfolio of the best alternatives for water loss management. Furthermore, the model showed robustness, as the selected portfolio of alternatives remained the same even when the weights of the evaluation criteria changed.
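A hedged sketch of the portfolio-selection step as a 0/1 integer linear program, using PuLP; the scores, costs, and budget below are toy assumptions, not the MUWSA figures:

```python
# Maximize the total MCDM score of the chosen alternatives within a budget.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

scores = [0.9, 0.8, 0.7, 0.6, 0.4]        # MCDM priority scores (assumed)
costs = [60, 50, 40, 30, 70]              # costs in TZS million (assumed)
budget = 150

prob = LpProblem("water_loss_portfolio", LpMaximize)
x = [LpVariable(f"select_{i}", cat="Binary") for i in range(len(scores))]
prob += lpSum(s * xi for s, xi in zip(scores, x))           # total score
prob += lpSum(c * xi for c, xi in zip(costs, x)) <= budget  # budget constraint
prob.solve()

print([i for i, xi in enumerate(x) if value(xi) == 1])      # selected alternatives
```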