A Stock Market Forecasting Model Combining Two-Directional Two-Dimensional Principal Component Analysis and Radial Basis Function Neural Network

Abstract

In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis (2D²PCA) and a radial basis function neural network (RBFNN) to forecast stock market behavior.

First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, 2D²PCA is utilized to reduce the dimension of the data and extract its intrinsic features.

Finally, an RBFNN accepts the data processed by 2D²PCA to forecast the next day's stock price or movement. The proposed model is applied to the Shanghai stock market index, and the experiments show that the model achieves a good level of fit. The proposed model is then compared with models that use the traditional dimension reduction methods of principal component analysis (PCA) and independent component analysis (ICA).

The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.

Introduction

The stock market is quite attractive if its behavior can be predicted; however, forecasting the stock market index is regarded as a difficult task due to its random walk characteristic. According to the Efficient Market Hypothesis [2], changes in stock market prices are driven by new information; because new information is unpredictable, stock market prices are also unpredictable.

Some researchers argue that the stock market can be predicted over the short term, as reported in studies by Los [3] and Haugen [4]. Guo [5] indicated that the Chinese stock market has gradually been acting as a barometer of the economy. The Shanghai stock market plays an important role in Chinese economic development, so an increasing number of forecasting models are being developed to predict Shanghai stock market trends.

Earlier studies along these lines have been reported by Cao et al. and others. Over the past two decades, many models based on soft computing have been proposed [11-16]. Among existing prediction approaches, numerous studies have used the RBFNN for stock price prediction.

RBFNN was first used to solve the interpolation problem of fitting a curve exactly through a set of points [17]. A large number of successful applications have shown that RBFNNs can be useful for stock price forecasting due to their ability to approximate any continuous function with arbitrary precision. RBFNNs also offer fast convergence and a powerful nonlinear problem-solving ability, which motivates this study's use of the RBFNN for stock price prediction.

When using an RBFNN for stock price forecasting, the observed original values of the prediction variables are usually used directly to build the prediction model. One key problem is that the inherent noise of the original values affects prediction performance. Many studies on time series analysis have suggested that preprocessing the raw data is useful and necessary for improving system performance and model generalization to unseen data.

For stock market forecasting, as new data is obtained, if the predictive model can be refined to account for it, then the model should be better adapted for the new data, and its predictive accuracy should be improved.

Thus, especially for predicting the stock market, with its inherent volatility, the predictive model should be dynamically learned on-line. In this learning context, the dimensionality of the raw data plays an important role in improving performance and reducing the computational complexity needed to learn the predictive model. To this end, many hybrid methods have been proposed to improve the performance of stock market forecasting systems [21-23].

These existing methods usually contain two stages: the first is feature extraction to remove noise; the second is a predictor to forecast the stock price. This indicates that more attention should be paid to the preprocessing methods used in stock market forecasting. In particular, more effective dimension reduction methods should be introduced to improve the performance of the forecasting model. Common approaches include data normalization, indicator reduction, and PCA [24], a very popular subspace analysis method that has been successfully applied in many domains for dimension reduction.

Tsai [27] used PCA as a feature selection method for stock prediction. Another well-known approach is ICA: Huang [29], Guo [30] and Yeh [31] proposed hybrid models combining ICA and support vector regression (SVR) for time series prediction tasks.

In these methods, PCA or ICA was used as a preprocessing tool before building a stock prediction model. PCA or ICA is suitable when the raw data take the form of a relatively low-dimensional vector. However, this condition is often not satisfied in stock prediction. In multivariable prediction systems, there are strong correlations between the variables, and the initial format of the raw data is a tensor. As feature extraction tools, both PCA and ICA need to transform the tensor into a vector, which has two drawbacks.

One is that vectorization incurs prohibitive computational complexity; for example, flattening a 20-day window of 36 variables yields a 720-dimensional vector whose covariance matrix has 720 × 720 entries, whereas the two-dimensional methods discussed below work with 36 × 36 or 20 × 20 matrices. The other is that vectorization breaks the correlation structure residing in the raw data. In this work, first, a sliding window and 36 technical variables were used to obtain a multidimensional representation of the forecasting variable. Second, 2D²PCA was applied to extract features from the predictor variables. Third, the features were used as the inputs of an RBFNN.

We attach importance to the influence of dimension reduction on the performance of the forecasting system.

Compared with PCA and ICA, 2D²PCA, as demonstrated in this paper, provides both computationally efficient preprocessing and more powerful feature extraction, leading to more accurate forecasting.

In previous studies, different stock markets have been modeled. Some scholars have focused on individual stocks, while others have paid more attention to the stock market index, which represents the average movement of many individual stocks. Compared with a single stock, the stock market index is relatively stable in reflecting overall market movement. The Shanghai stock market index, collected from the China stock market, is used to illustrate the proposed two-stage model.

The prediction performance of the proposed approach is compared with alternative approaches; the model comparison shows that the proposed approach performs better than the alternatives. The rest of this paper is structured as follows. Section 2 gives a brief overview of 2D²PCA and RBFNN. The proposed model is presented in Section 3. In Section 4, experiments are conducted to evaluate the performance of the proposed model.

The conclusion is given in Section 5.

Research Methodology

In this section, we briefly review the basic concepts of the underlying technologies used in the study.

PCA

PCA is a well-known dimension reduction method used in pattern recognition and signal processing. Given N training samples x_i, the total scatter matrix of all samples is defined, in its standard form, as follows:

$C = \frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})(x_i - \bar{x})^{T}$,

where $\bar{x}$ is the mean of all training samples.


The principal vector of PCA is the eigenvector corresponding to the maximum eigenvalue of C. Generally, it is not enough to have only one optimal projection vector, so the discriminant subspace V_d is composed of the orthogonal eigenvectors of C corresponding to the first d largest eigenvalues. The resulting feature vector y_i for x_i is obtained by projecting x_i onto the subspace V_d, i.e., $y_i = V_d^{T} x_i$. From the above, we can see that there are some disadvantages to PCA.

First, each sample is transformed from a 2D matrix to a long 1D vector, which breaks the spatial structure of the original matrix. Retaining this 2D structure may be critically important when transforming the data to extract features.

Second, and perhaps more importantly, due to the high dimension, it may be difficult to accurately estimate the covariance matrix C from a finite training set. Finally, the processing time taken by PCA may be prohibitive. To overcome these problems, Yang et al. proposed 2DPCA, which constructs the covariance matrix directly from the 2D sample matrices A_i rather than from vectorized samples, so the matrix to be eigendecomposed is much smaller. For this reason, the computational complexity of 2DPCA is far less than that of PCA.

At the same time, because the covariance matrix is built directly from the matrices A_i, the spatial structure information is retained during processing. However, the main disadvantage of 2DPCA is that the dimension of its extracted features is often much larger than that of PCA. Furthermore, some studies indicate that 2DPCA essentially works in only one direction of the matrix, leaving the other direction uncompressed.

Zhang et al. proposed a two-directional two-dimensional method called 2D²PCA to address this problem. Suppose the projection matrix V_d has been obtained, yielding the 2DPCA feature y_i = A_i V_d. Then y_i is transposed to give y_i^T. After that, y_i^T is regarded as a new training sample on which 2DPCA is carried out again (called alternative 2DPCA), yielding the feature matrix z_i. If p eigenvectors are selected in alternative 2DPCA to form the projecting subspace W_p, the 2D²PCA algorithm can be described as follows:

$z_i = (A_i V_d)^{T} W_p$.  (5)

From Formula (5) we can see that the main idea of 2D²PCA is that the original matrix A_i is first projected into a 2DPCA subspace to extract the row-direction feature y_i, which is then transposed to yield y_i^T, and the alternative 2DPCA is utilized to extract the column-direction feature.

Thus, the feature matrix z_i contains both the row-direction and the column-direction feature information from the original matrix.
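The procedure above maps directly onto a few lines of NumPy. The following is a minimal sketch rather than the authors' code: the matrix orientation (rows as days, columns as variables), the centering step, and the function names are our assumptions.

```python
import numpy as np

def top_eigenvectors(cov, k):
    """Columns are the k eigenvectors with the largest eigenvalues."""
    vals, vecs = np.linalg.eigh(cov)                 # ascending eigenvalues
    return vecs[:, np.argsort(vals)[::-1][:k]]

def two_directional_2dpca(samples, d, p):
    """samples: (N, rows, cols) array of 2D matrices A_i.

    Returns the projections V_d and W_p and the features z_i = (A_i V_d)^T W_p.
    """
    A = np.asarray(samples, dtype=float)
    centered = A - A.mean(axis=0)
    # 2DPCA: covariance built directly from the 2D matrices (cols x cols)
    G_row = np.einsum('nij,nik->jk', centered, centered) / len(A)
    V_d = top_eigenvectors(G_row, d)
    # Alternative 2DPCA on the transposed projections y_i^T (rows x rows)
    Y_T = np.einsum('nij,jd->nid', centered, V_d).transpose(0, 2, 1)
    G_col = np.einsum('nij,nik->jk', Y_T, Y_T) / len(A)
    W_p = top_eigenvectors(G_col, p)
    Z = Y_T @ W_p                                    # feature matrices z_i
    return V_d, W_p, Z

# Example: windows of 20 days x 36 indicators compressed to d x p features
windows = np.random.default_rng(0).normal(size=(100, 20, 36))
V_d, W_p, Z = two_directional_2dpca(windows, d=5, p=5)
print(Z.shape)   # (100, 5, 5): 720 raw values -> 25 coefficients per sample
```

With this layout, V_d compresses the 36-variable direction and W_p the 20-day direction, which is the two-directional reduction the text describes.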


The feature set obtained from 2DPCA is generally much higher dimensional than that of 2D²PCA. So, from the standpoint of dimension reduction, the performance of 2DPCA is much worse than that of 2D²PCA. For on-line stock forecasting systems, if 2DPCA rather than 2D²PCA is used as a tool to preprocess raw data, then the training complexity of the model will be drastically increased. For more details please refer to [32].

RBFNN

The idea of the RBFNN [30] derives from the theory of function approximation. The RBFNN has a three-layered structure. The input layer collects and feeds the input data to each node of the hidden layer. The hidden nodes implement a set of radial basis functions, which are often chosen to be Gaussian functions.

The output layer implements a weighted linear summation of the hidden-layer outputs to yield the prediction value, which may be thresholded if a binary decision is sought. The RBFNN architecture is shown in Fig 1, where x and f(x) are the input and output of the network respectively, and W_i is the output weight between hidden unit i and the output unit. When the Gaussian function is used, the common form of f(x) for an RBFNN is:

$f(x) = \sum_{i=1}^{h} W_i \exp\left(-\frac{\lVert x - c_i \rVert^2}{2\sigma_i^2}\right)$,

where h is the number of hidden units, and c_i and σ_i are the center and width of hidden unit i.
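To make the structure concrete, here is a minimal self-contained sketch of such a network with fixed centers and a single shared Gaussian width. The class name, the random choice of centers in the usage example, and the plain least-squares weight fit are our simplifications, not the paper's exact training procedure.

```python
import numpy as np

class RBFN:
    """Gaussian radial basis function network with least-squares output weights."""

    def __init__(self, centers, spread):
        self.centers = np.atleast_2d(centers)    # (h, n_features)
        self.spread = spread                     # shared Gaussian width sigma
        self.weights = None

    def _hidden(self, X):
        # Squared Euclidean distances between inputs and centers
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.spread ** 2))

    def fit(self, X, y):
        H = self._hidden(np.atleast_2d(X))
        # Output weights solved by linear least squares
        self.weights, *_ = np.linalg.lstsq(H, np.asarray(y), rcond=None)
        return self

    def predict(self, X):
        return self._hidden(np.atleast_2d(X)) @ self.weights

# Usage: pick centers (here, a random subset of training samples), then fit.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)); y = np.sin(X.sum(axis=1))
model = RBFN(centers=X[rng.choice(200, 20, replace=False)], spread=1.5).fit(X, y)
pred = model.predict(X[:3])
```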

The network training is divided into two steps: first, determining the hidden-unit centers and widths; second, solving for the output-layer weights.

Proposed Forecasting Model

The input and output of the system

In this study, we have two goals: to forecast the next day's closing price and to forecast the movement of the closing price. The Shanghai stock index is used for testing the proposed method and for comparison with the PCA and ICA dimension reduction approaches. There are two important factors regarding the input data. The first concerns the variables taken from the price history for each day. In previous studies, many technical variables have been proposed as features to predict the trend of the stock market, such as the closing price, moving average line, Williams index and so on.

Different models apply different variables, and there is no unified framework for the selection of input variables. For example, Teixeira et al. [34] and Ettes [12] selected only two input variables, which is quite different from Zorin et al. For our study, we believe too few variables will fail to represent the intrinsic features of the stock market, while too many variables will lead to computational and, potentially, model generalization difficulties.

In [ 36 ], 22 variables were selected as input to the prediction model and satisfactory results were achieved. As the stock market price is determined by various economic and non-economic factors, it is difficult to predict stock market trends using only a few factors. For this reason, with reference to [ 34 ] and [ 37 ], we selected 36 variables for each day as the input to the prediction model, as reported in Table 1.

Other variables are calculated from x_t, x_t^h, x_t^l and x_t^o, and the descriptions and formulae of the variables are displayed in Table 1. The principle we used to select the variables is threefold. The first part is the basic data of the stock market, which can be obtained directly from the stock market database.

These variables include I_1 to I_4. The second part is the technical variables commonly used by investors, for example the moving average and the Williams index; these include I_5 to I_25. The third part is the movement of the basic data or technical variables, which represents the trend of changes in the data.

These variables include I_26 to I_36. The second key factor of the input data is the length of the sliding window. The stock price on a given day is influenced by the preceding period rather than by any single day alone; thus, it is not reasonable to select data from only one day, or several days, to predict the next day's price.

Some studies have also indicated that recent daily data have a bigger influence on the future price than data from further in the past. In this study, 20 days is selected as the length of the sliding window for each variable. Unlike previous studies, both the historical data and the related technical variables are taken into account in this approach. The raw input data is a natural tensor containing the correlations between technical variables.

So it is important to retain this tensor structure in the feature extraction step. Obviously, it is difficult for conventional forecasting models to accept such large data as input, so the first step is to reduce the dimension of the raw data before sending it to the forecasting model. In this study, we provide two kinds of output for the forecasting system.

The overall framework of the model

To predict the future trend of a stock price, the system is built from the following four components: data collection, computation of the technical variables, dimension reduction, and the forecasting process. A research forecasting model (Fig 2) organized along these lines is presented to evaluate the performance of the proposed approach.

The data processing begins with the selection of data from the stock market database and ends with the predicted closing price or closing-price movement. After the raw data are prepared, they are sent to the second module of the system to compute the variables used as the raw features of the forecasting model. Based on Table 1, the first four variables (I_1 to I_4) are the raw data x_t^o, x_t^h, x_t^l and x_t, with the remaining 32 technical variables (I_5 to I_36) calculated from the given formulae.

Data collecting

A sliding window is applied to the entire data set to extract the raw input data used by the forecasting model, as mentioned above. This process is shown in Fig 3. The gray block represents the input data of the forecasting model, which includes 20 trading days' data for the 36 technical variables. As the window slides, N input samples are obtained from the trading data set.

The white block represents the target output data, which is the next day's closing price or the price movement. A similar method was also reported in [ 34 ].
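As an illustration, the sliding-window construction might look like the sketch below; the array layout and the use of column 0 as the closing-price target are assumptions made for the example, not details taken from the paper.

```python
import numpy as np

def make_windows(indicators, window=20):
    """Build (sample, target) pairs from a (T, 36) array of daily indicators.

    indicators: rows are trading days, columns the 36 technical variables;
    column 0 is assumed to hold the closing price used as the target.
    Returns X of shape (N, window, 36) and y of shape (N,).
    """
    T = len(indicators)
    X = np.stack([indicators[t:t + window] for t in range(T - window)])
    y = indicators[window:, 0]          # next day's closing price
    return X, y

# Example with synthetic data: 500 days, 36 indicators
data = np.random.default_rng(1).normal(size=(500, 36))
X, y = make_windows(data)
print(X.shape, y.shape)                 # (480, 20, 36) (480,)
```

Each sample X[t] covers days t through t+19, and its target y[t] is the value on day t+20, matching the gray and white blocks of Fig 3.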

Dimension reduction

The function of the third module is to reduce the dimension of the input data. Dimension reduction is a key step in signal processing and pattern recognition systems; it aims to filter out redundant information and extract the intrinsic features from high-dimensional data. In this study, 36 technical variables were selected, each measured over the 20 previous trading days, so the dimensionality of the input data is high and the data will generally contain some redundancy.

In order to decrease the computational complexity both of the system design and of the forecasting, it is both acceptable from an information-loss standpoint and necessary to reduce the data dimensionality. We propose to use 2D²PCA to extract features from the original data. As discussed in the previous section, the size of z_i is much smaller than that of A_i, and thus z_i is chosen as the input of the RBFNN.

Forecasting process

The last module of the system is the RBFNN, which accepts z_i from the 2D²PCA dimension reduction module and forecasts the next day's price or the price movement. The training set is used to learn the weights W_1 and W_2 of the RBFNN. After training is completed, the test set is used to evaluate the performance of the forecasting model.

The input variables after dimension reduction do not usually lie within the range [0, 1] in the training set; each data point is therefore scaled to this range by Eq (8), the standard min-max normalization:

$\tilde{x} = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$.  (8)

The architecture of the RBFNN is as follows. The first layer has radial basis transfer function (RADBAS) neurons; it computes its weighted input with the Euclidean distance weight function and its net input by combining the weighted inputs.

The second layer has linear transfer function (PURELIN) neurons; it computes its weighted input with the dot product weight function and its net input with the sum net input function. The training method used for the RBFNN is the least squares (LS) algorithm [38]. The following steps are repeated until the network's mean squared error falls below the goal or the maximum number of neurons is reached: in outline, the training sample that most reduces the network error is added as the center of a new radial basis neuron, and the output-layer weights are then recomputed by least squares. In this work, the Gaussian function is adopted because it is the most widely used transfer function and has performed well in most forecasting cases [39].
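A rough, self-contained sketch of that incremental loop follows. The greedy rule used here (add the sample with the largest residual as a new center, then refit all output weights by least squares) is our stand-in for the error-reduction criterion of the OLS algorithm, not the authors' exact implementation.

```python
import numpy as np

def gaussian_design(X, centers, spread):
    """Hidden-layer activations for inputs X given the current centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * spread ** 2))

def train_incremental_rbf(X, y, spread=1.0, goal=1e-3, max_neurons=50):
    """Greedily add training points as RBF centers until the MSE goal is met."""
    idx = []
    centers = np.empty((0, X.shape[1]))
    weights = np.zeros(0)
    for _ in range(max_neurons):
        fitted = gaussian_design(X, centers, spread) @ weights if idx else np.zeros_like(y)
        residual = y - fitted
        if np.mean(residual ** 2) < goal:
            break
        # Simple heuristic standing in for the error-reduction criterion:
        # take the sample with the largest residual as the next center.
        idx.append(int(np.argmax(np.abs(residual))))
        centers = X[idx]
        H = gaussian_design(X, centers, spread)
        weights, *_ = np.linalg.lstsq(H, y, rcond=None)
    return centers, weights
```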

The performance advantage of the proposed model, which will be experimentally investigated in the next section, may lie in the following factors. First, traditional forecasting models may be classified as auto-regressive and multi-variable models. The former are based on the idea that all related factors are reflected in the closing price of the stock, so the closing price history determines the future trend. The latter hold that some technical variables, such as the moving average, relative strength index, oscillator and Williams index, are very useful for making predictions.

In our proposed model, the input data is a matrix, with columns representing the technical variables and rows representing the historical data for those variables, so the influence on the stock market price of both the technical variables and the historical data is taken into account.

Second, from the formulae in Table 1 we can see that the technical variables are correlated with each other, and historical data within the sliding window are obviously also correlated. So the input data are correlated in both the row and the column directions. In the proposed model, 2D²PCA is carried out to reduce the dimension of the raw data; its advantage is that it extracts useful information by removing correlation in both the row and the column directions.

This fact suggests a potential improvement in performance compared with models that do not decorrelate in both directions. The other key issue is the algorithmic complexity of the methods. With ICA, it is common to use PCA to whiten the raw data before the ICA computation itself.

The complexity of ICA is therefore even greater than that of PCA. Compared with ICA and PCA, 2D²PCA accelerates forecasting through more efficient computation. Last but not least, an RBFNN is used as the predictor. Compared with traditional neural networks, the RBFNN has several distinct characteristics [42]. First, it has the best-approximation property and no local minimum problem.

Second, it has strong robustness and adaptive capability, which help it give better forecasting results. Furthermore, it has fast convergence and good stability. For these reasons, the RBFNN is widely used in pattern recognition and time series prediction.

Since stock market data has random walk characteristics, stock market forecasting is a nonlinear regression problem, and the characteristics of the RBFNN make it well suited to such problems.

Experimental Results and Analysis

Data preparation

In order to verify the effectiveness of the proposed model, the Shanghai stock market index, collected from 4 Jan. onwards, is used. The trading data are split into two parts: the earlier part is used as the training set, and the later part as the test set. The daily Shanghai stock market index closing prices are shown in Fig 4.

As discussed in the last section, a sliding window is employed to build up the raw data of the training set, with training samples obtained from the trading data. The target output of a training sample is the closing price for the next day. Experiments are performed on a PC.

Experiment design

There are two purposes in this experiment. In the PCA models, the dimension of the input data is determined by Eq (3).

Here, the fixed-point algorithm [36] is used to implement ICA, and a method based on the amplitude of the weight vector is used to select the ICA subspace [43]. The training parameters for the RBFNN are as follows: the mean squared error goal is 0, and SPREAD is selected through repeated experiments according to performance considerations. The maximum number of neurons was set equal to the number of training samples. In our experiment, the architecture of the BPNN was chosen to be 7-10-1; that is, the input layer has 7 nodes, the hidden layer has 10 nodes and the output layer has 1 node.

The number of hidden nodes was determined through trial and error because the BPNN has no general rule for selecting the optimal number of hidden nodes. The hidden-layer and output-layer transfer functions were chosen to be the hyperbolic tangent sigmoid transfer function and the linear transfer function, respectively.

The maximum number of training epochs and the training error goal were fixed in advance. Clearly, the dimension of the 2D²PCA model is equal to or even smaller than that of the PCA and ICA models under the three different conditions.

To measure the performance of the proposed model, 12 performance indicators were selected. The descriptions and formulae of these indicators are given in Table 2. Among them, PCD, R², r_1, r_2, MAPE, HR, TR, RMSE and SMAPE measure whether the predicted value is similar to the actual value. Large PCD, R² and r_1 indicate that the predicted result is similar to the actual value.

Small MAPE, RMSE and SMAPE likewise indicate that the predicted result is close to the actual value. HR measures the prediction accuracy of the stock market trend, while TR and r_2 evaluate the return of the different models. ET, TT and ST measure the effective computation time of the proposed model; the total running time of the proposed model is the sum of ET, TT and ST.
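For reference, the following sketch implements a few of these indicators in their common textbook forms (RMSE, MAPE, SMAPE and hit rate); the paper's exact formulae are given in Table 2, so treat these definitions as assumptions.

```python
import numpy as np

def rmse(actual, pred):
    return float(np.sqrt(np.mean((np.asarray(actual) - np.asarray(pred)) ** 2)))

def mape(actual, pred):
    a, p = np.asarray(actual, dtype=float), np.asarray(pred, dtype=float)
    return float(np.mean(np.abs((a - p) / a)) * 100)

def smape(actual, pred):
    a, p = np.asarray(actual, dtype=float), np.asarray(pred, dtype=float)
    return float(np.mean(np.abs(a - p) / ((np.abs(a) + np.abs(p)) / 2)) * 100)

def hit_rate(actual, pred):
    """Fraction of days on which the predicted direction matches the actual one
    (one common definition; the paper's HR formula is in Table 2)."""
    a, p = np.asarray(actual, dtype=float), np.asarray(pred, dtype=float)
    return float(np.mean(np.sign(np.diff(a)) == np.sign(p[1:] - a[:-1])))
```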

Fig 5 presents the fitted curves of the predicted and actual values, and Fig 6 displays the return under four conditions. In Fig 5, the blue curve represents the actual data and the red curve the prediction. From Fig 5, we can see that the prediction results of the proposed model are much closer to the actual data than those of the other models.

The performance of the three models changes with the dimension of the input data: the prediction performance improves as the dimension increases. The return curves in Fig 6 compare the models in terms of return to find which gives the best result. In Fig 6, a point above zero means the return is positive and the investor profits; the more points above zero, the better the model performs.

The total returns of the different models are listed in Table 3. Fig 7 shows the training process. The results show that, for the BPNN, data processed by 2D²PCA has better convergence properties than data processed by PCA or ICA.

Table 3 and Table 4 compare the forecasting results of the 2D²PCA model with the PCA model, the ICA model and the raw data model. From Table 3, almost all the indicators of the 2D²PCA model are superior to those of the PCA and ICA models.

In the first group of experiments especially, the 2D²PCA model shows much better performance than the other models, as its PCD, R² and HR indicators demonstrate. This indicates that dimension reduction plays an important role in the accuracy of the predictive model; when the feature dimensionality is too small, however, not enough useful information can be extracted from the raw data. Another key point of note is that the hit rate is not fully consistent with the returns reported in Table 3.

That is, a model may achieve a higher hit rate yet deliver lower returns. The reason is that the hit rate only represents the frequency of forecasting accuracy and does not take into account the fluctuation level of the stock market [14]. So when the actual stock price fluctuates drastically, both the hit rate and the return should be considered when evaluating the performance of a forecasting model.

Table 4 shows the running time of the proposed model. In the four groups of experiments, the dimension reduction time of the 2D²PCA models is much less than that of the PCA models.

Because the dimension of the 2D²PCA model is quite close to those of the PCA and ICA models, the RBFNN training and simulation times are similar for the three types of model. It is also found that, in the four groups of experiments, because the convergence speed of the 2D²PCA model is much faster than that of the other two models, its feature extraction time is significantly lower.

In regard to total running time, due to the savings in dimension reduction time, the 2D²PCA model is also more efficient than the PCA model.

Conclusion

This investigation evaluated 36 technical variables for forecasting short-term stock market trends, utilized 2D²PCA to reduce the dimension of the input data, and combined an RBFNN with 2D²PCA to build a forecasting model.

The proposed approach with RBFNN models provides strong robustness and adaptability in predicting the daily closing price, so it is able to cope with the fluctuation of stock market values and yields good prediction accuracy. Furthermore, due to the low complexity of 2D²PCA dimension reduction and the fast convergence of the associated regression model, the proposed model shows better computational efficiency in stock market forecasting than its alternatives. Overall, the results presented in this study confirm that the proposed model provides a promising method for stock forecasting.

Although the proposed model provides many advantages, it also has a minor weakness: while it obtains high forecasting accuracy at low computational cost, the input dimension of the RBFNN is still high. A possible way to alleviate this is to select more efficient technical variables; however, this remains an open problem.


PLoS ONE. Academic Editor: Boris Podobnik. The authors have declared that no competing interests exist. This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.


Supporting Information

S1 Data. Data of experiment. (XLS)


Acknowledgments

The authors gratefully acknowledge the helpful comments and suggestions of the editor and reviewers, which have improved the presentation.

Funding Statement

This work is partially supported by the National Natural Science Foundation of China.

Data Availability

All relevant data are within the paper and its Supporting Information files.

References

Lu CJ. Integrating independent component analysis-based denoising scheme with neural network for stock price prediction. Expert Systems with Applications.

Fama EF. Efficient capital markets: A review of theory and empirical work. Journal of Finance 25(2).
Los CA. Nonparametric efficiency testing of Asian markets using weekly data. Advances in Econometrics.
Haugen RA. The new finance: The case against efficient markets. Prentice Hall, New Jersey.
Guo K, Zhou WX, Cheng SW. Economy barometer analysis of China stock market: A dynamic analysis based on the thermal optimal path method. Journal of Management Sciences in China (Guan Li Ke Xue Xue Bao) 15(1).
Cao Q, Leggio KB, Schniederjans MJ. A comparison between Fama and French's model and artificial neural networks in predicting the Chinese stock market. Computers and Operations Research 32.
Yang YW, Liu GZ, Zhang ZP. Stock market trend prediction based on neural networks, multiresolution analysis and dynamical reconstruction.
Zhang D, Jiang Q, Li X. Application of neural networks in financial data mining. Proceedings of the International Conference on Computational Intelligence, Istanbul.
Dai WS, Wu JY, Lu CH. Combining nonlinear independent component analysis and neural network for the prediction of Asian stock market indexes. Expert Systems with Applications.
Guo ZQ, Wang HQ, Liu Q, Yang J. A feature fusion based forecasting model for financial time series. PLoS ONE 9(6).
White H. Economic prediction using neural networks: A case of IBM daily stock returns. International Conference on Neural Networks, IEEE Computer Society Press, Vol 2, San Diego.
Ettes D. Trading the stock markets using genetic fuzzy modelling. Proceedings of the Conference on Computational Intelligence for Financial Engineering, New York.
Lam SS. A genetic fuzzy expert system for stock market timing. Proceedings of the IEEE Conference on Evolutionary Computation, Vol 1, Seoul.
Guo ZQ, Wang HQ, Liu Q. Financial time series forecasting using LPP and SVM optimized by PSO.
Huang W, Nakamori Y, Wang SY. Forecasting stock market movement direction with support vector machine. Computers and Operations Research.
Hassan MR, Nath B. Stock market forecasting using hidden Markov model. Proceedings of the 5th International Conference on Intelligent Systems Design and Applications, Wroclaw.
Powell MJD. Radial basis functions for multivariable interpolation. Clarendon Press, New York.
Versace M, Bhatt R, Hinds O, Shiffer M. Predicting the exchange traded fund DIA with a combination of genetic algorithms and neural networks. Expert Systems with Applications 27(3).
Wang XL, Sun CW. Solve fractal dimension of Shanghai stock market by RBF neural networks.
Sun B, Li TK. Proceedings of the IEEE 17th International Conference on Industrial Engineering and Engineering Management, Xiamen.
Ao SI. A hybrid neural network cybernetic system for quantifying cross-market dynamics and business forecasting. Soft Computing 15(6).
Atsalakis GS, Valavanis KP. Surveying stock market forecasting techniques, Part II. Expert Systems with Applications 36(3).
Ajith A, Baikunth N, Mahanti PK. Hybrid intelligent systems for stock market analysis. Proceedings of the International Conference on Computational Science, Springer-Verlag, London.
Turk M, Pentland A. Eigenfaces for recognition. Journal of Cognitive Neuroscience 1.
Huang Y, Bai M, Li ZZ. Principal component analysis and BP neural network modeling for futures forecasting. Mathematics in Practice and Theory 37.
Ravi V, Kurniawan H, Thai PNK, Kumar PR. Soft computing system for bank performance prediction. Applied Soft Computing 8(1).
Tsai C, Hsiao Y. Combining multiple feature selection methods for stock prediction. IEEE Computer Society, Guangzhou.
Huang SC, Li CC, Lee CW, Chang MJ. Combining ICA with kernel based regressions for trading support systems on financial options. The 3rd International Symposium on Intelligent Decision Technologies and Intelligent Interactive Multimedia Systems and Services.
Guo C, Su M. Spectral clustering method based on independent component analysis for time series.
Yeh CC, Chang B, Lin HC. Integrating phase space reconstruction, independent component analysis and random forest for financial time series forecasting. The 29th Annual International Symposium on Forecasting, Hong Kong.
Zhang DQ, Zhou ZH. (2D)²PCA: Two-directional two-dimensional PCA for efficient face representation and recognition. Neurocomputing.
Yang J, Zhang D, Frangi AF, Yang JY. Two-dimensional PCA: A new approach to appearance-based face representation and recognition. IEEE Transactions on PAMI 26(1).
Teixeira LA, Oliveira ALI. A method for automatic stock trading combining technical analysis and nearest neighbor classification.
Zorin A, Borisov A. Modelling Riga Stock Exchange index using neural networks.
Hyvärinen A, Oja E. A fast fixed-point algorithm for independent component analysis. Neural Computation 9(7).
Huang CJ, Yang DX, Chuang YT. Application of wrapper approach and composite classifier to the stock trend prediction.
Chen S, Cowan CFN, Grant PM. Orthogonal least squares learning algorithm for radial basis function networks. IEEE Transactions on Neural Networks.
Yee PV, Haykin SS. Regularized radial basis function networks: Theory and applications.
Kao LJ, Chiu CC, Lu CJ, Yang JL. Integration of nonlinear independent component analysis and support vector regression for stock price forecasting.
Hsu CW, Chang CC, Lin CJ. A practical guide to support vector classification. Technical Report, Department of Computer Science and Information Engineering, National Taiwan University, Taipei.
Xie TT, Yu H, Wilamowski B. Comparison between traditional neural networks and radial basis function networks. IEEE International Symposium on Industrial Electronics (ISIE), Gdansk.
Bartlett MS, Movellan JR, Sejnowski TJ. Face recognition by independent component analysis. IEEE Transactions on Neural Networks 13(6).
