Measurements form the basis of empirical research. The purposes of measurements are usually divided into the following three categories:
Fault diagnostics is one popular application of measurements based on observing a process. A machine, animal or plant is monitored with a sensor and a model predicts when there is a failure or disease. E.g. lameness in cattle can be predicted using force sensors, pig cough can be measured using sound analysis and bearing failure in machinery can be predicted based on resonance frequency. The models used to predict problems can be very simple, based on limit values, or more complex models that are fitted to a sample dataset (this is called machine learning).
The objectives can also overlap, e.g. we may also want to research the process we are controlling. The purpose of the measurement affects its requirements for accuracy and reliability.
Plan the measurements carefully. Incomplete planning will compromise the whole study. Ask the following questions when planning:
Do you need backups?
These notes try to give you means for answering all of these questions.
A sensor is a transducer that converts a physical quantity to a measurable analog signal.
An ADC (analog-to-digital converter) converts the analog signal to digital form.
A computer/datalogger/microcontroller is used to store and display the information.
The wires from the sensor to the ADC should be as short as possible to minimize noise and level changes. Voltage signals suffer from voltage drop when long wires are used, due to the resistance of the circuit. Current signals, however, are a lot less sensitive to interference and do not suffer from level changes.
Digital signals can be transmitted without loss of information and compressed efficiently (e.g. over the Internet), and are therefore the preferred method for transmitting data.
A signal is a description of how one parameter relates to another. Usually signals are measured as a function of time, e.g. how temperature changes over time, but they can also be measured as a function of e.g. distance or light intensity.
Analog signals (voltage, current) take a continuous range of values. Most real-world signals are continuous.
Digital signals are discrete: they can only take a limited range of values and are sampled at a certain interval. With proper sampling the digital signal can represent all the information present in the original analog signal.
The quality of the sample depends on the accuracy of the AD conversion and the sampling rate.
Computers can only work with digital signals, which have numerous advantages over analog signals:
Some digital signals such as the one in USB cable are still voltage signals, but they only take two levels representing binary values 0 and 1. Digital signals are also commonly transferred wirelessly or in optical fibres.
Analog-to-digital converters (ADC) work by taking a sample of the original signal and comparing it to a fixed number of known voltage levels. The signal is then rounded to the closest reference level. The resolution of the ADC is given in bits: an \( n \)-bit ADC can produce \( 2^n \) distinct values. Each ADC works over a certain voltage range, e.g. 0 – 5 V.
Suppose we have an 8-bit ADC with a range of 0 – 5 V. It can produce \( 2^8 = 256 \) distinct numbers (0 – 255), so over the 5 V range its voltage resolution is 5 V/256 \( \approx \) 20 mV. This value is also called the LSB (least significant bit).
In practice this means that when digitized, analog values from 10 to 30 mV are rounded to 20 mV, analog values from 30 to 50 mV to 40 mV, etc. The quantization error for the analog value 10 mV is 20 mV - 10 mV = \( \frac{1}{2} \) LSB.
Now if you have a temperature sensor with a sensitivity of \( \frac{10\,mV}{^{\circ}C} \), then digitization introduces an error of \( \pm 1^{\circ}C \) to the measurement. If you want to increase the precision of your measurement you need to use an ADC with a higher resolution. This still doesn’t guarantee that the measurements will be accurate; for that you need calibration.
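The arithmetic above can be sketched in a few lines of plain Python (function and variable names are my own):

```python
# Voltage resolution (LSB) of an n-bit ADC over a given range,
# and the resulting quantization error for a 10 mV/°C sensor.

def lsb(v_range, n_bits):
    """Voltage resolution of an n-bit ADC over v_range volts."""
    return v_range / 2 ** n_bits

resolution = lsb(5.0, 8)            # 8-bit ADC, 0-5 V range
print(resolution * 1000)            # ~19.5 mV, i.e. about 20 mV

sensitivity = 0.010                 # sensor output, V per °C (from the text)
temp_error = (resolution / 2) / sensitivity
print(temp_error)                   # ~1 °C maximum quantization error
```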
The voltage resolution of the ADC can be calculated with:

\( \Delta V = \frac{V_{max} - V_{min}}{2^n} \)
So the voltage resolution for a 12-bit ADC with a -10 – 10 V range is:

\( \Delta V = \frac{10\,V - (-10\,V)}{2^{12}} = \frac{20\,V}{4096} \approx 4.9\,mV \)
| Bits | Levels | Resolution (mV, 0 – 5 V range) |
|------|--------|--------------------------------|
| 8    | 256    | 20   |
| 10   | 1024   | 4.9  |
| 12   | 4096   | 1.2  |
| 14   | 16 384 | 0.3  |
| 16   | 65 536 | 0.08 |
The number of values we get from an ADC determines how accurately we can measure things in a certain range. We get the best accuracy when we use the whole voltage range of the instrument. Usually we need to scale the signal, either by amplifying or attenuating the original signal, to achieve this.
We’ll learn some options for reducing the signal's amplitude in the exercise. If you need to amplify a signal you can use operational amplifiers; we are not doing that in this course, but you can find the needed circuits in basic electronics books. There are also special amplifiers for common small signals, e.g. for load cells and thermocouples.
ADC resolution fixes either the range or the accuracy of your measurement. Suppose you have a scale with 10-bit resolution. If you want to measure weight with 1 g accuracy, the maximum range you can have is 1024 g.
Or if you want to use the same ADC to weigh 10 000 kg of grain, the best resolution you can get is about 10 kg. If you need either a wider range or better accuracy you’ll need a device with a higher resolution.
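A minimal sketch of this trade-off, using the numbers from the two scale examples above:

```python
# Range/resolution trade-off of a 10-bit ADC: 2**10 = 1024 levels.
levels = 2 ** 10

# With 1 g resolution the maximum range is:
max_range_g = levels * 1            # 1024 g

# With a 10 000 kg range the best resolution is:
resolution_kg = 10_000 / levels     # ~9.8 kg, i.e. about 10 kg

print(max_range_g, resolution_kg)
```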
In addition to getting accurate samples of the voltage we also need to consider the rate of change of the signal. Here are two definitions that help us decide how often we need to sample:
The sampling theorem: A continuous signal can be properly sampled only if it does not contain frequency components above one-half of the sampling rate (Nyquist criterion, Nyquist–Shannon theorem).
Proper sampling: If you can reconstruct the analog signal from the samples, you must have done the sampling correctly (Smith 1997).
Note that the highest frequency of interest in a signal is easy to define for periodic signals (waveforms with recurring frequencies). For other signals we need to consider how we can reliably measure the fastest peaks, e.g. in force measurements and ECG (electrocardiogram) signals.
In practice we usually do not know what the highest frequency is, and we need to find out:
Choosing an appropriate sample rate is also important because using too high a sample rate increases the need for data storage, power consumption (especially a concern with wireless devices), analysis time, etc.
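The Nyquist criterion can be demonstrated numerically. In this sketch (plain Python, frequencies chosen by me), a 9 Hz sine sampled at only 10 Hz produces exactly the same samples as an inverted 1 Hz sine, i.e. it aliases:

```python
import math

fs = 10.0                           # sample rate, Hz (below Nyquist for 9 Hz)
t = [n / fs for n in range(50)]     # 5 seconds of sample instants

s9 = [math.sin(2 * math.pi * 9 * ti) for ti in t]   # 9 Hz signal
s1 = [math.sin(2 * math.pi * 1 * ti) for ti in t]   # 1 Hz signal

# The sampled 9 Hz sine is indistinguishable from an inverted 1 Hz sine:
max_diff = max(abs(a + b) for a, b in zip(s9, s1))
print(max_diff)                     # ~0: the sample sets are mirror images
```

This is why frequency content above half the sample rate must be removed (with an analog anti-aliasing filter) before sampling.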
Signals are often shown in figures as continuous lines, because when there are many samples using a symbol for each sample makes the plot look messy, as in Figure 1.2 B. It is however important to remember that the lines are not actual measured values and are just used to connect the dots (if we give each point on a line a value, it is called linear interpolation).
If you are plotting the data to check whether you have sampled often enough, it is a good idea to use a symbol for each point and leave the lines out. Also make sure you zoom close enough to the narrowest peaks so you can really tell how far apart successive samples are.
Figure 1.2 shows the vertical force on a force plate while a university lecturer jumps off it. The data was sampled at 200 Hz and then decimated to 40 Hz using a computer. The lines in subplots A and C look deceptively similar, but subplots B and C suggest that 200 Hz is enough to measure the maximum force, while 40 Hz is not quite enough, and it would be wise to increase the sample rate to be sure. (The difference between the maximum forces is 27 N.)
In electronics noise is a random fluctuation in an electrical signal. Random noise has a flat spectral density and a Gaussian distribution . Noise exists in all electronic circuits. Figure 1.3 shows what random noise in a static signal looks like.
Measurement systems generally have noise, i.e. fluctuation around the true value. If you only take one sample every 15 minutes the noise will go unnoticed, but when you take several samples from a sensor at a moderately high sample rate (e.g. 10 Hz) there will likely be some variability in the values. However, if the noise level in the system is lower than the resolution of your ADC it will be hidden. You can analyze the amount of noise in your system using statistics.
Quantization error is also random noise with a standard deviation of \( \frac{1}{\sqrt{12}} \) LSB. This is usually a small addition to the noise already present in the measured analog signal.
The fact that random noise has a Gaussian distribution means that the most probable value of the signal is the average value, and the standard deviation gives us a meaningful estimate of the variability.
If the signal is stationary during repeated measurements you can make the measured value more precise by taking several samples and averaging. If the signal is not stationary, but the noise is random you can use a moving average filter.
Note that you should check whether or not the noise is Gaussian before you do this. If the noise is not Gaussian, you should check whether the variability in the signal is due to interference and whether you can get rid of it by improving the measurement setup or filtering the signal.
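The moving average filter mentioned above can be sketched in a few lines of Python (the window length is my choice):

```python
def moving_average(x, window=5):
    """Replace each sample with the mean of the last `window` samples."""
    out = []
    for i in range(len(x)):
        start = max(0, i - window + 1)
        out.append(sum(x[start:i + 1]) / (i + 1 - start))
    return out

# A constant (noise-free) signal passes through unchanged:
print(moving_average([2.0] * 8, window=3))
```

A longer window removes more noise but also smooths out real fast changes in the signal, so the window length must respect the fastest peaks you want to measure.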
The normal distribution (Gaussian), denoted \( N(\mu, \sigma) \), is a bell-shaped distribution with mean \( \mu \) and standard deviation \( \sigma \).
The parameters can be estimated from a sample \( x \) with \( N \) values using:

\( \mu = \frac{1}{N} \sum_{i=1}^{N} x_i \qquad \sigma = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} (x_i - \mu)^2} \)
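In plain Python, the sample mean and (sample) standard deviation can be computed as:

```python
import math

def mean(x):
    return sum(x) / len(x)

def std(x):
    m = mean(x)
    # sample standard deviation: divide by N - 1
    return math.sqrt(sum((xi - m) ** 2 for xi in x) / (len(x) - 1))

print(mean([1.0, 2.0, 3.0, 4.0]))   # 2.5
print(std([1.0, 2.0, 3.0, 4.0]))    # ~1.29
```

(The standard library `statistics.mean` and `statistics.stdev` do the same thing.)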
A flat spectral density is another way of saying that the noise doesn’t have a specific frequency that causes more variability than others. You can obtain the spectrum of the signal using the discrete Fourier transform (DFT), commonly calculated with the FFT algorithm.
Again, if there are marked peaks in the spectrum you should try to find the origins of those peaks. Power line interference at 50 Hz is quite common in measurements, but you can remove it using a lowpass or band-reject filter. Generally, frequency interference can be removed using digital filters (or analog ones, but digital filters are usually better).
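A small sketch of spotting such a peak in plain Python (the test signal and sample rate are my own): compute DFT magnitudes and find the dominant frequency bins, one of which is the 50 Hz interference:

```python
import cmath
import math

fs = 200                            # sample rate, Hz
N = 200                             # 1 second of data -> 1 Hz per DFT bin
# 5 Hz signal plus 50 Hz "power line" interference:
x = [math.sin(2 * math.pi * 5 * n / fs)
     + 0.5 * math.sin(2 * math.pi * 50 * n / fs) for n in range(N)]

def dft_magnitude(x, k):
    """Magnitude of DFT bin k (frequency k * fs / N hertz)."""
    return abs(sum(xn * cmath.exp(-2j * math.pi * k * n / len(x))
                   for n, xn in enumerate(x)))

mags = [dft_magnitude(x, k) for k in range(N // 2)]
top_two = sorted(sorted(range(len(mags)), key=mags.__getitem__)[-2:])
print(top_two)                      # [5, 50]: signal at 5 Hz, interference at 50 Hz
```

In practice you would use an FFT (e.g. `numpy.fft.rfft`) instead of this direct O(N²) DFT; the direct form is shown only because it maps one-to-one to the DFT definition.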
The following measures can be used to describe the quality of the measured signal:
If the signal is normally distributed, then the standard error (SE), i.e. the typical error between the mean of the signal and the true mean (actual measured value), is given by:

\( SE = \frac{\sigma}{\sqrt{N}} \)
Use at least 10 samples to calculate these, preferably 100 or 1000.
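As a sketch in plain Python, assuming the SE formula \( \sigma/\sqrt{N} \) above:

```python
import math

def standard_error(x):
    """Standard error of the mean: sample std divided by sqrt(N)."""
    m = sum(x) / len(x)
    sd = math.sqrt(sum((xi - m) ** 2 for xi in x) / (len(x) - 1))
    return sd / math.sqrt(len(x))

# With four samples the SE is half the standard deviation (sqrt(4) = 2):
print(standard_error([1.0, 2.0, 3.0, 4.0]))
```

The \( 1/\sqrt{N} \) factor is why averaging more samples makes the mean more precise: 100 samples shrink the typical error by a factor of 10.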
Standard deviation is a measure of precision and you should report it in your results in addition to the mean. Alternatively you can use the standard error.
This section contains brief explanations about the properties of measurements and general terminology. It deals with both static and dynamic measurements.
A static measurement is a measurement of a quantity that doesn’t change during sampling, e.g. the weight of a cow at this instant, or the moisture of a grain sample taken from the dryer.
A dynamic measurement is a measurement of a constantly changing quantity and needs additional consideration compared to static measurements, e.g. force measurements of moving objects, speed and acceleration.
Suppose you take several static measurements in a short period of time, e.g. measure the weight of a cow several times in a row. Then:
Accuracy is the difference between the average measured value and the correct value (= bias).
Precision is the variation between the samples, i.e. the repeatability of the measurement. It is measured with the standard deviation.
Accuracy is a measure of systematic error and can be improved using calibration. Precision is a measure of random noise and can be improved by taking more measurements and averaging, or in many cases by improving the measurement setup.
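The difference can be demonstrated with simulated data (the bias and noise levels here are my own choices):

```python
import math
import random

random.seed(0)
true_value = 100.0
# Simulated sensor: a systematic offset (bias) of +2.0 plus Gaussian noise:
samples = [true_value + 2.0 + random.gauss(0, 0.5) for _ in range(10_000)]

m = sum(samples) / len(samples)
bias = m - true_value                           # accuracy (systematic error)
sd = math.sqrt(sum((s - m) ** 2 for s in samples) / (len(samples) - 1))

print(round(bias, 1))   # ~2.0 -- calibration would remove this
print(round(sd, 2))     # ~0.5 -- averaging reduces its effect on the mean
```

No amount of averaging removes the +2.0 offset; only calibration does. Conversely, calibration does nothing for the random scatter.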
Note: Accuracy and precision are not synonyms when it comes to technology.
Systematic error : is measured with accuracy
Random error : is measured with precision and can be reported using standard deviation or standard error.
Absolute error : magnitude of the difference between the true value and the measured value, expressed in the measured unit, e.g. in \( ^{\circ}C \) for a temperature measurement.
Relative error : the error as a percentage: \( \frac{\text{absolute error}}{\text{true value}} \cdot 100\% \)
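Both error measures as a tiny sketch (the example values are mine):

```python
def absolute_error(measured, true):
    """Magnitude of the difference, in the measured unit."""
    return abs(measured - true)

def relative_error(measured, true):
    """Error as a percentage of the true value."""
    return absolute_error(measured, true) / abs(true) * 100

print(absolute_error(20.5, 20.0))   # 0.5 (e.g. degrees C)
print(relative_error(20.5, 20.0))   # 2.5 (%)
```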
Step response : the measurement system's response to a step input. It is used to determine the dynamic properties of the system.
In calibration the measured value is compared to the true value (or a more accurate value obtained from a better instrument or lab analysis).
It usually helps if you know physics: e.g. use a known mass to calibrate force sensors, rotary movement with a known radius to calibrate accelerometers, boiling water and ice to calibrate temperature sensors, etc. Calibration also needs to be checked during long-running measurements because the properties of the sensors or the system can change over time. Sensors in tough environments such as cattle buildings or fields are especially prone to changes in calibration.
Some instruments have calibration certificates from the manufacturer. There are specialized calibration labs to do the calibration for you.
Span : This is the measurement range of an instrument e.g. 0-5 V, -10 – 10V.
Drift: the change of the zero level of a measurement over time. Drift can occur e.g. due to temperature, when the properties of the amplifiers or sensors change.
Sensitivity is the change in output of the sensor relative to the input: 10 \( \frac{mV}{^o C} \) , \( \frac{mV}{V} \) …
Resolution tells how accurately a quantity can be measured with a sensor or a system. For ADCs the resolution is expressed in bits, from which you can calculate the voltage resolution. For sensors it is usually indicated in the measured unit, e.g. \( \pm 0.1 ^{\circ} \) C or \( \pm 2cm \).
The sensitivity of a 500 kg load cell is 2 mV/V f.s. (full scale). With the recommended input voltage of 10 V the sensitivity is 20 mV / 500 kg, which means that if we want to measure force with 1 kg resolution we need to be able to measure voltage with 0.04 mV resolution.
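The arithmetic of the load cell example as a sketch:

```python
# 500 kg load cell, sensitivity 2 mV/V full scale, 10 V excitation.
sensitivity_fs = 2e-3          # V output per V of excitation, at full scale
excitation = 10.0              # V
full_scale_kg = 500.0

output_fs = sensitivity_fs * excitation     # 0.02 V = 20 mV at 500 kg
volts_per_kg = output_fs / full_scale_kg

print(output_fs * 1000)        # 20.0 mV full-scale output
print(volts_per_kg * 1000)     # 0.04 mV needed per 1 kg of resolution
```

This is why such small signals are usually fed through a dedicated load cell amplifier before the ADC.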
Hysteresis means that there is a difference in readings depending on whether the measured value is approached from above or below (e.g. rising or falling temperature).