Optimization of Machine Learning Process Using Parallel Computing
 
Faculty of Electronics and Information Technology, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland
Published: 2018-12-01
 
Adv. Sci. Technol. Res. J. 2018; 12(4)
ABSTRACT:
The aim of this paper is to discuss the use of parallel computing in supervised machine learning processes to reduce computation time. This mode of computing has gained popularity because sequential computing is often insufficient for large-scale problems such as complex simulations or real-time tasks. After presenting the foundations of machine learning and neural network algorithms, as well as three types of parallel models, the author briefly characterizes the experiments carried out and the results obtained. Experiments on image recognition, run on five sets of empirical data, demonstrate a significant reduction in calculation time compared to classical algorithms. Finally, possible directions of further research on parallel optimization of calculation time in supervised perceptron learning are briefly outlined.
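The paper's own implementation is not reproduced in the abstract; as a hedged illustration of the general idea it describes, the sketch below applies data parallelism (one common parallel model for perceptron training) by splitting per-epoch weight corrections across worker processes and merging them. All function names, parameters, and the toy data set are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch only: data-parallel batch perceptron training.
# Each worker computes summed weight corrections on its shard of the
# data; the main process merges them. Not the paper's actual method.
from multiprocessing import Pool
import random

def partial_update(args):
    """Return summed perceptron weight/bias corrections for one data shard."""
    weights, bias, shard = args
    dw = [0.0] * len(weights)
    db = 0.0
    for x, y in shard:
        activation = sum(w * xi for w, xi in zip(weights, x)) + bias
        pred = 1 if activation >= 0 else -1
        if pred != y:  # misclassified sample -> accumulate correction
            for i, xi in enumerate(x):
                dw[i] += y * xi
            db += y
    return dw, db

def train_parallel(data, n_features, workers=4, epochs=20, lr=0.1):
    """Batch perceptron training with shard-level parallelism (hypothetical)."""
    weights = [0.0] * n_features
    bias = 0.0
    shards = [data[i::workers] for i in range(workers)]  # round-robin split
    with Pool(workers) as pool:
        for _ in range(epochs):
            results = pool.map(partial_update,
                               [(weights, bias, s) for s in shards])
            for dw, db in results:  # merge partial corrections
                weights = [w + lr * d for w, d in zip(weights, dw)]
                bias += lr * db
    return weights, bias

if __name__ == "__main__":
    random.seed(0)
    # Linearly separable toy set: label = sign(x0 + x1 - 1)
    pts = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(200)]
    data = [(x, 1 if x[0] + x[1] - 1 >= 0 else -1) for x in pts]
    w, b = train_parallel(data, n_features=2)
    acc = sum((1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1) == y
              for x, y in data) / len(data)
    print(f"training accuracy: {acc:.2f}")
```

Averaging or summing per-shard corrections keeps each epoch's result close to a sequential batch update while letting the per-sample work run concurrently; this trades a small amount of update fidelity for wall-clock speedup on large data sets.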
CORRESPONDING AUTHOR:
Michal Kazimierz Grzeszczyk   
Faculty of Electronics and Information Technology, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland