Improvement of the Deep Forest Classifier by a Set of Neural Networks
Abstract
A Neural Random Forest (NeuRF) and a Neural Deep Forest (NeuDF), classification algorithms that combine an ensemble of decision trees with neural networks, are proposed in the paper. The main idea underlying NeuRF is to combine the class probability distributions produced by the decision trees by means of a set of neural networks with shared parameters. The networks are trained with a loss function that measures the classification error. Every neural network can be viewed as a non-linear function of the probabilities of a class. NeuDF is a modification of the Deep Forest, or gcForest, proposed by Zhou and Feng, that uses NeuRFs as its building blocks. The numerical experiments demonstrate the superior performance of NeuDF and show that NeuRF is comparable with the random forest.
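To make the aggregation scheme concrete, the sketch below shows one way the NeuRF idea could be realized: per-tree class probabilities from a fitted random forest are combined, class by class, by a single small network whose parameters are shared across all classes, trained with a cross-entropy classification loss. The network architecture, layer sizes, and training settings here are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the NeuRF idea from the abstract. The hidden size (32),
# optimizer, and epoch count are illustrative assumptions, not the paper's setup.
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# Step 1: fit an ordinary random forest and collect per-tree class probabilities.
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
# tree_probs has shape (n_samples, n_trees, n_classes).
tree_probs = np.stack([t.predict_proba(X) for t in forest.estimators_], axis=1)

# Step 2: one small network, shared across all classes, maps the vector of
# tree probabilities for a given class to a single aggregated score.
shared_net = nn.Sequential(
    nn.Linear(forest.n_estimators, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

def class_scores(probs):
    # probs: (batch, n_trees, n_classes) -> scores: (batch, n_classes).
    # The same network (shared parameters) is applied to every class column,
    # so each score is a non-linear function of the probabilities of one class.
    per_class = [shared_net(probs[:, :, c]) for c in range(probs.shape[2])]
    return torch.cat(per_class, dim=1)

# Step 3: train the shared network with a classification (cross-entropy) loss.
probs_t = torch.tensor(tree_probs, dtype=torch.float32)
labels = torch.tensor(y, dtype=torch.long)
opt = torch.optim.Adam(shared_net.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(100):
    opt.zero_grad()
    loss = loss_fn(class_scores(probs_t), labels)
    loss.backward()
    opt.step()

pred = class_scores(probs_t).argmax(dim=1)
print("training accuracy:", (pred == labels).float().mean().item())
```

Note that if the shared network simply averaged its inputs, the scheme would reduce to the standard soft voting of a random forest; the trainable non-linear aggregation is what distinguishes NeuRF from plain averaging.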
References
L. Bertinetto, J. Valmadre, J.F. Henriques, A. Vedaldi, and P.H.S. Torr. Fully-convolutional Siamese networks for object tracking. arXiv:1606.09549v2, Sep 2016.
G. Biau and E. Scornet. A random forest guided tour. arXiv:1511.05741v1, Nov 2015.
G. Biau, E. Scornet, and J. Welbl. Neural random forests. arXiv:1604.07143v1, Apr 2016.
L. Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996. https://doi.org/10.1023/A:1018054314350
L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001. https://doi.org/10.1023/A:1010933404324
J. Bromley, J.W. Bentz, L. Bottou, I. Guyon, Y. LeCun, C. Moore, E. Sackinger, and R. Shah. Signature verification using a siamese time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(4):737–744, 1993. https://doi.org/10.1142/S0218001493000339
C. Hettinger, T. Christensen, B. Ehlert, J. Humpherys, T. Jarvis, and S. Wade. Forward thinking: Building and training neural networks one layer at a time. arXiv:1706.02480v1, Jun 2017.
S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), volume 1, pages 539–546. IEEE, 2005. https://doi.org/10.1109/CVPR.2005.202
A. Criminisi, J. Shotton, and E. Konukoglu. Decision forests: A unified framework for classification, regression, density estimation, manifold learning and semi-supervised learning. Foundations and Trends in Computer Graphics and Vision, 7(2-3):81–227, 2011. https://doi.org/10.1561/0600000035
M.E.H. Daho, N. Settouti, M.E.A. Lazouni, and M.E.A. Chikh. Weighted vote for trees aggregation in random forest. In 2014 International Conference on Multimedia Computing and Systems (ICMCS), pages 438–443. IEEE, Apr 2014. https://doi.org/10.1109/ICMCS.2014.6911187
R.A. Dara, M.S. Kamel, and N. Wanas. Data dependency in multiple classifier systems. Pattern Recognition, 42(7):1260–1273, 2009. https://doi.org/10.1016/j.patcog.2008.11.035
J. Demsar. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7:1–30, 2006.
K. Fawagreh, M.M. Gaber, and E. Elyan. Random forests: from early developments to recent advancements. Systems Science & Control Engineering, 2(1):602–609, 2014. https://doi.org/10.1080/21642583.2014.956265
A.J. Ferreira and M.A.T. Figueiredo. Boosting algorithms: A review of methods, theory, and applications. In C. Zhang and Y. Ma, editors, Ensemble Machine Learning: Methods and Applications, pages 35–85. Springer, New York, 2012. https://doi.org/10.1007/978-1-4419-9326-7_2
R. Genuer, J.-M. Poggi, C. Tuleau-Malot, and N. Villa-Vialaneix. Random forests for big data. Big Data Research, 9:28–46, 2017. https://doi.org/10.1016/j.bdr.2017.07.003
M. Hibino, A. Kimura, T. Yamashita, Y. Yamauchi, and H. Fujiyoshi. Denoising random forests. arXiv:1710.11004v1, Oct 2017.
J. Hu, J. Lu, and Y.-P. Tan. Discriminative deep metric learning for face verification in the wild. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1875–1882. IEEE, 2014. https://doi.org/10.1109/CVPR.2014.242
Y. Ioannou, D. Robertson, D. Zikic, P. Kontschieder, J. Shotton, M. Brown, and A. Criminisi. Decision forests, convolutional networks and the models in-between. arXiv:1603.01250v1, Mar 2016.
A. Jurek, Y. Bi, S. Wu, and C. Nugent. A survey of commonly used ensemble-based classification techniques. The Knowledge Engineering Review, 29(5):551–581, 2014. https://doi.org/10.1017/S0269888913000155
H. Kim, H. Kim, H. Moon, and H. Ahn. A weight-adjusted voting algorithm for ensemble of classifiers. Journal of the Korean Statistical Society, 40(4):437–449, 2011. https://doi.org/10.1016/j.jkss.2011.03.002
P. Kontschieder, M. Fiterau, A. Criminisi, and S.R. Bulo. Deep neural decision forests. In Proceedings of the IEEE International Conference on Computer Vision, pages 1467–1475, 2015. https://doi.org/10.1109/ICCV.2015.172
A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical Report 1, Computer Science Department, University of Toronto, 2009.
L.I. Kuncheva. Combining Pattern Classifiers: Methods and Algorithms. Wiley-Interscience, New Jersey, 2004.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. https://doi.org/10.1109/5.726791
H.B. Li, W. Wang, H.W. Ding, and J. Dong. Trees weighting random forest method for classifying high-dimensional noisy data. In 2010 IEEE 7th International Conference on E-Business Engineering, pages 160–163. IEEE, Nov 2010. https://doi.org/10.1109/ICEBE.2010.99
M. Lichman. UCI Machine Learning Repository, 2013.
G. Louppe. Understanding random forests: From theory to practice. arXiv:1407.7502v3, Jun 2015.
D. Maji, A. Santara, S. Ghosh, D. Sheet, and P. Mitra. Deep neural network and random forest hybrid architecture for learning to detect retinal vessels in fundus images. In Engineering in Medicine and Biology Society (EMBC), 2015 37th Annual International Conference of the IEEE, pages 3029–3032. IEEE, Aug 2015. https://doi.org/10.1109/EMBC.2015.7319030
K. Miller, C. Hettinger, J. Humpherys, T. Jarvis, and D. Kartchner. Forward thinking: Building deep random forests. arXiv:1705.07366, May 2017.
R. Polikar. Ensemble learning. In C. Zhang and Y. Ma, editors, Ensemble Machine Learning: Methods and Applications, pages 1–34. Springer, New York, 2012. https://doi.org/10.1007/978-1-4419-9326-7_1
Y. Ren, L. Zhang, and P.N. Suganthan. Ensemble classification and regression - recent developments, applications and future directions [review article]. IEEE Computational Intelligence Magazine, 11(1):41–53, 2016. https://doi.org/10.1109/MCI.2015.2471235
L. Rokach. Ensemble-based classifiers. Artificial Intelligence Review, 33(1-2):1–39, 2010. https://doi.org/10.1007/s10462-009-9124-7
L. Rokach. Decision forest: Twenty years of research. Information Fusion, 27:111–125, 2016. https://doi.org/10.1016/j.inffus.2015.06.005
C.A. Ronao and S.-B. Cho. Random forests with weighted voting for anomalous query access detection in relational databases. In Artificial Intelligence and Soft Computing. ICAISC 2015, volume 9120 of Lecture Notes in Computer Science, pages 36–48, Cham, 2015. Springer. https://doi.org/10.1007/978-3-319-19369-4_4
W. Shen, Y. Guo, Y. Wang, K. Zhao, B. Wang, and A. Yuille. Deep regression forests for age estimation. arXiv:1712.07195v1, Dec 2017.
W. Shen, K. Zhao, Y. Guo, and A. Yuille. Label distribution learning forests. arXiv:1702.06086v4, Oct 2017.
L.V. Utkin and M.A. Ryabinin. Discriminative metric learning with deep forest. arXiv:1705.09620v1, May 2017.
L.V. Utkin and M.A. Ryabinin. A deep forest for transductive transfer learning by using a consensus measure. In A. Filchenkov, L. Pivovarova, and J. Zizka, editors, Artificial Intelligence and Natural Language. AINL 2017, volume 789 of Communications in Computer and Information Science, pages 194–208. Springer, Cham, 2018. https://doi.org/10.1007/978-3-319-71746-3_17
L.V. Utkin and M.A. Ryabinin. A Siamese deep forest. Knowledge-Based Systems, 139:13–22, 2018. https://doi.org/10.1016/j.knosys.2017.10.006
S. Wang, C. Aggarwal, and H. Liu. Using a random forest to inspire a neural network and improving on it. In Proceedings of the 2017 SIAM International Conference on Data Mining, pages 1–9. Society for Industrial and Applied Mathematics, Jun 2017. https://doi.org/10.1137/1.9781611974973.1
D.H. Wolpert. Stacked generalization. Neural networks, 5(2):241–259, 1992. https://doi.org/10.1016/S0893-6080(05)80023-1
M. Wozniak, M. Graña, and E. Corchado. A survey of multiple classifier systems as hybrid systems. Information Fusion, 16:3–17, 2014. https://doi.org/10.1016/j.inffus.2013.04.006
P. Yang, E.H. Yang, B.B. Zhou, and A.Y. Zomaya. A review of ensemble methods in bioinformatics. Current Bioinformatics, 5(4):296–308, 2010. https://doi.org/10.2174/157489310794072508
Z.-H. Zhou. Ensemble Methods: Foundations and Algorithms. CRC Press, Boca Raton, 2012.
Z.-H. Zhou and J. Feng. Deep forest: Towards an alternative to deep neural networks. arXiv:1702.08835v2, May 2017.
J. Zhu, Y. Shan, J.C. Mao, D. Yu, H. Rahmanian, and Y. Zhang. Deep embedding forest: Forest-based serving with deep embedding features. arXiv:1703.05291v1, Mar 2017.
DOI: https://doi.org/10.31449/inf.v44i1.2740
This work is licensed under a Creative Commons Attribution 3.0 License.