Percentage error is calculated as the ratio of errors to the total number of opportunities for error. Lower error rates correspond to fewer defects and higher quality.
One error per 100 opportunities is a 1% error rate. Two errors in 4,000 opportunities for error would be 1 in 2,000, or 0.05 in 100, a 0.05% error rate. So the ratio is (errors)/(opportunities for error). Turning this into a percentage, as with any ratio, is done by multiplying by 100.
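The formula above can be sketched as a tiny function (the function name is mine, not a standard one):

```python
def error_rate_percent(errors, opportunities):
    """Error rate as a percentage: (errors / opportunities) * 100."""
    return errors / opportunities * 100

# 1 error per 100 opportunities -> 1% error rate
print(error_rate_percent(1, 100))
# 2 errors per 4,000 opportunities -> 0.05% error rate
print(error_rate_percent(2, 4000))
```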
It is crucial to correctly identify opportunities for error. Also, there is a difference between an error, which is a failure of process, and a defect, which is a failure in result. Quality engineering is the discipline that defines these factors and calculates the error percentage. Quality management is the field that works to reduce errors to acceptable levels.
Statistics can be used to measure and analyze error rates. Once we get below 1 error per 1,000 opportunities, error percentages become clumsy. Over the past 20 years, manufacturers have learned to reduce errors to the range of 5 errors per million opportunities. In that range, statistical values (called sigma values) are used rather than percentages. The high-end goal is often six sigma: fewer than 3.4 defects per million opportunities for error.
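A common way to translate defects per million opportunities (DPMO) into a sigma value uses the inverse of the standard normal distribution plus the conventional 1.5-sigma shift. This is a sketch of that convention, not the only way practitioners compute it:

```python
from statistics import NormalDist

def dpmo_to_sigma(dpmo):
    """Convert defects per million opportunities (DPMO) to a sigma
    level, applying the conventional 1.5-sigma long-term shift."""
    yield_fraction = 1 - dpmo / 1_000_000
    return NormalDist().inv_cdf(yield_fraction) + 1.5

# The classic six sigma target of 3.4 DPMO comes out at about 6.0:
print(round(dpmo_to_sigma(3.4), 1))  # 6.0
```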
If you think of a flat-screen TV, which has roughly four transistors for each pixel (dot on the screen) and hundreds of thousands of pixels, you can see why reducing the error rate is so important: each error in transistor production produces a blank pixel. Errors in manufacturing are inevitable. That is why large-screen TVs are so much more expensive than small-screen TVs: it is hard to create a large screen with all perfect pixels.
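The yield math behind this can be made concrete. If each transistor fails independently at a given DPMO rate, the chance of a completely perfect panel is (1 - p) raised to the number of transistors, so doubling the screen area squares the odds against you. The pixel counts and defect rate below are illustrative numbers, not manufacturing data:

```python
def perfect_panel_probability(pixels, transistors_per_pixel, dpmo):
    """Probability that every transistor on a panel is defect-free,
    assuming independent defects at the given DPMO rate."""
    p_ok = 1 - dpmo / 1_000_000
    return p_ok ** (pixels * transistors_per_pixel)

# Illustrative: 500,000 pixels, 4 transistors each, 5 defects per million.
small = perfect_panel_probability(500_000, 4, 5)
# A panel with four times as many pixels is far less likely to be perfect:
large = perfect_panel_probability(2_000_000, 4, 5)
```

Even at 5 defects per million, a large panel's chance of having zero bad transistors is orders of magnitude worse than a small panel's, which is the economics the paragraph above describes.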
If you want to learn more, you can read my book, Quality Management Demystified, available on Amazon, or my hubs on Six Sigma and DMAIC.