Fig. 7 | Journal of Activity, Sedentary and Sleep Behaviors

From: Machine learning in physical activity, sedentary, and sleep behavior research

a A regression problem with randomly generated data points. Varying the hyperparameter changes the model’s complexity (flexibility), yielding three distinct cases: underfitting (the model is too simple to describe the underlying process, exhibiting high bias and low variance), a parsimonious fit (the least complex model that still describes the observed data well), and overfitting (the model is overly complex and fits the noise, exhibiting high variance and low bias). b A classification problem with randomly generated data points, showing the same three cases as the regression problem. c A sketch of the relationship between total error and model complexity when training an ML algorithm. An overly simplistic model produces a high total error, primarily due to underfitting. Increasing the model’s complexity reduces the error up to a point, but increasing it excessively leads to overfitting; the ‘sweet spot’ between the two depends on the problem at hand and on how much error can be tolerated. The parsimonious fit is achieved when the error is minimized and the model has a reasonable level of complexity.
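The U-shaped total-error curve in panel c is straightforward to reproduce numerically. The sketch below is an illustrative example, not code from the article: it fits polynomials of increasing degree to noisy samples of a smooth function, using the degree as the complexity hyperparameter. It assumes NumPy and scikit-learn, and the specific degrees (1, 4, 15) are hypothetical choices standing in for the three regimes.

```python
# A minimal sketch of the under-/overfitting trade-off shown in the figure,
# using polynomial regression on synthetic data (assumes numpy and scikit-learn).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Randomly generated data points: a smooth underlying process plus noise.
X = rng.uniform(0, 1, size=(60, 1))
y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.2, size=60)

# Hold out data to estimate the total (generalization) error.
X_train, X_test = X[:40], X[40:]
y_train, y_test = y[:40], y[40:]

# The polynomial degree is the hyperparameter controlling model complexity.
for degree in (1, 4, 15):  # underfit, parsimonious fit, overfit (illustrative)
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree={degree:2d}  train MSE={train_err:.3f}  test MSE={test_err:.3f}")
```

Degree 1 should show high error on both splits (underfitting, high bias), degree 4 a low held-out error (the parsimonious fit), and degree 15 a small training error but an inflated held-out error (overfitting, high variance), mirroring the total-error curve of panel c.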
