What would happen in machine learning if the data weren't corrupted by noise?

Machine learning courses emphasise that naturally generated data carries a level of uncertainty, because the measurement process is imperfect and introduces noise.
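
One standard way to formalise this (my framing, not anything specific from a particular course) is the additive-noise model

$$y = f(x) + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2),$$

where $f$ is the true underlying function and $\varepsilon$ is measurement noise; a perfect measurement device would correspond to $\sigma = 0$, so every observation satisfies $y = f(x)$ exactly.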

Assuming we had a perfect measurement device and noise weren't a factor, how would this affect the empirical and structural risks in the learning and validation processes?
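
(For reference, by empirical risk I mean the training average $R_{\text{emp}}(h) = \frac{1}{n}\sum_{i=1}^{n} L(h(x_i), y_i)$, and by structural risk the complexity-penalised objective $R_{\text{emp}}(h) + \lambda\,\Omega(h)$ used in structural risk minimisation.) To make the question concrete, here is a minimal sketch that measures the gap between empirical (training) risk and validation risk with and without noise; the sine target, the degree-9 polynomial, and the sample sizes are arbitrary choices of mine:

    import numpy as np

    rng = np.random.default_rng(0)

    def make_data(n, sigma):
        # Additive-noise model: y = f(x) + eps, with f(x) = sin(3x) as a stand-in target
        x = rng.uniform(-1.0, 1.0, n)
        y = np.sin(3.0 * x) + rng.normal(0.0, sigma, n)
        return x, y

    for sigma in (0.0, 0.3):  # perfect measurement vs. noisy measurement
        x_tr, y_tr = make_data(50, sigma)
        x_va, y_va = make_data(50, sigma)
        coef = np.polyfit(x_tr, y_tr, deg=9)  # a deliberately flexible model
        emp = np.mean((np.polyval(coef, x_tr) - y_tr) ** 2)  # empirical (training) risk
        val = np.mean((np.polyval(coef, x_va) - y_va) ** 2)  # validation risk
        print(f"sigma={sigma}: empirical risk={emp:.5f}, validation risk={val:.5f}")

With $\sigma = 0$ both numbers should come out near zero, while with $\sigma > 0$ the empirical risk looks optimistically low relative to the validation risk; my question is how this gap, and the need for the complexity penalty $\Omega(h)$, change in the noiseless setting.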

Topic: machine-learning

Category: Data Science
