Machine Learning
Experiments on Machine Learning
This post describes general-purpose tools for generating and visualizing datasets that represent or approximate mathematical objects such as functions, curves, and surfaces; these tools are used by programs described in other posts on this website.
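As a minimal sketch of what such a dataset-generation tool might do (the function name, sampling scheme, and output file are hypothetical, not the actual tools the post describes), the following samples a one-variable real function on a uniform grid and saves the points as a CSV dataset:

```python
import csv
import math

def generate_dataset(f, x_min, x_max, n_points, path):
    """Sample f on a uniform grid of n_points over [x_min, x_max]
    and write the (x, y) pairs to a CSV file."""
    step = (x_max - x_min) / (n_points - 1)
    with open(path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["x", "y"])
        for i in range(n_points):
            x = x_min + i * step
            writer.writerow([x, f(x)])

# Example: 200 samples of sin(x) on [0, 2*pi]
generate_dataset(math.sin, 0.0, 2.0 * math.pi, 200, "sin_dataset.csv")
```

A CSV of (x, y) pairs like this is the common interchange format that regression tools in the posts below can consume directly.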
This post deals with the approximation of scalar real functions of one or more real variables using the PyCaret library, without writing any code: everything is done from the command line of two Python scripts.
The XGBoost algorithm, well known for winning numerous Kaggle competitions, gives impressive results when fitting functions; the results are excellent both in terms of error metrics and training performance.
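To illustrate the idea of fitting a function with a gradient-boosted tree ensemble, here is a minimal sketch using scikit-learn's GradientBoostingRegressor as a stand-in for XGBoost (the synthetic dataset and hyperparameters are illustrative assumptions, not taken from the post):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

# Noisy samples of a one-variable function to approximate
rng = np.random.default_rng(42)
X = rng.uniform(-3.0, 3.0, size=(500, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=500)

# Gradient-boosted trees (scikit-learn stand-in for XGBoost)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X, y)

# Evaluate against the clean function on a uniform grid
X_test = np.linspace(-3.0, 3.0, 100).reshape(-1, 1)
mae = mean_absolute_error(np.sin(X_test[:, 0]), model.predict(X_test))
```

The same fit-and-evaluate pattern applies with the actual `xgboost` library, whose `XGBRegressor` exposes a scikit-learn-compatible interface.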
Support Vector Machine algorithms, best known for classification, can also be used for regression, and in particular for approximating both scalar and vector real functions of one or more real variables.
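A minimal sketch of SVM regression applied to function approximation, using scikit-learn's SVR with an RBF kernel (the target function and hyperparameters are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import SVR

# Dense samples of a scalar function of one variable
X = np.linspace(0.0, 2.0 * np.pi, 300).reshape(-1, 1)
y = np.sin(X[:, 0])

# Support Vector Regression with an RBF kernel
svr = SVR(kernel="rbf", C=10.0, epsilon=0.01)
svr.fit(X, y)

# Worst-case absolute error of the learned approximation
max_err = np.max(np.abs(svr.predict(X) - y))
```

The `epsilon` parameter sets the width of the insensitive tube around the target: smaller values force a tighter approximation at the cost of more support vectors.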
This post shows a use of the PolynomialRegression class of the Accord.NET framework, demonstrating that classical polynomial regression can reach good levels of accuracy with extremely short training times.
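Accord.NET is a .NET framework, so the class itself is used from C#; as a language-neutral sketch of the same technique, least-squares polynomial regression can be expressed in a few lines with `numpy.polyfit` (the cubic target below is an illustrative assumption):

```python
import numpy as np

# Samples of a known cubic, the function to recover
X = np.linspace(-1.0, 1.0, 100)
y = 2.0 * X**3 - X + 0.5

# Least-squares fit of a degree-3 polynomial (stand-in for
# Accord.NET's PolynomialRegression class)
coeffs = np.polyfit(X, y, deg=3)
y_hat = np.polyval(coeffs, X)
max_err = np.max(np.abs(y_hat - y))
```

Because the target is itself a polynomial of the chosen degree, the fit is exact up to floating-point error, and training reduces to solving one small linear system, which explains the very short learning times.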
This post shows how to use Weka, specifically the SMO algorithm for SVM regression with the PUK kernel, to perform regression on datasets generated synthetically from continuous, bounded real functions of one variable.
This post shows how to use Weka, specifically the SMO algorithm for SVM-based forecasting with a polynomial kernel, to forecast a univariate synthetic time series with periodicity and trend.
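Weka is a Java toolkit, so the post itself works through its GUI and Java classes; as a rough Python sketch of the underlying idea, a time series with trend and periodicity can be forecast one step ahead by training an SVM regressor with a polynomial kernel on lagged values (the series, lag count, and hyperparameters are illustrative assumptions, not the post's settings):

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic univariate series: linear trend plus a periodic component
t = np.arange(300)
series = 0.05 * t + np.sin(2 * np.pi * t / 20)

# Turn the series into a supervised problem: predict the next value
# from the previous `lags` observations
lags = 20
X = np.array([series[i - lags:i] for i in range(lags, len(series))])
y = series[lags:]

# SVM regression with a polynomial kernel (analogous in spirit to
# Weka's SMO regression with a polynomial kernel)
model = SVR(kernel="poly", degree=2, coef0=1.0, C=10.0)
model.fit(X[:-50], y[:-50])        # train on all but the last 50 steps
pred = model.predict(X[-50:])      # one-step-ahead forecasts for the tail
```

Recursive multi-step forecasting would feed each prediction back into the lag window; the one-step setup above is the simpler building block.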