## OpenWave #8: Calculating significant wave height

There are multiple ways of calculating significant wave height (Hs). According to Wikipedia, the original definition was the average of the highest third of waves: measure the wave heights (trough to crest) over a set period, order the dataset from lowest to highest, take the top third of the values and average them.
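As a quick sketch of the highest-third definition (in Python rather than the Matlab used later in this post, and with made-up wave heights purely for illustration):

```python
import numpy as np

# Hypothetical individual wave heights (trough to crest), in metres
wave_heights = np.array([0.4, 1.2, 0.8, 2.1, 1.5, 0.6, 1.9, 1.1, 0.7])

# Sort from lowest to highest, take the top third, and average those values
sorted_h = np.sort(wave_heights)
top_third = sorted_h[-(len(sorted_h) // 3):]
hs = top_third.mean()
```

For these nine heights the top third is the three largest values (1.5, 1.9, 2.1), so Hs is their mean.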

The more modern definition of significant wave height is four times the standard deviation of the surface elevation, or equivalently four times the square root of the zeroth-order moment (area) of the wave spectrum. This post will only look at the standard deviation method; the calculation using the wave spectrum will be covered in a later post.

Matlab was used to test this calculation. First, some simulated wave data was generated (the details of this will be covered in a later post). This simulated wave data is the vertical surface displacement over time and looks something like this:

Calculating significant wave height using the old definition of the highest third of waves requires measuring individual waves. This can be done in Matlab by finding the local minima and maxima using the islocalmin() and islocalmax() functions. These functions return the indices in the dataset where the minima/maxima are found; plotting these points over the original data looks like this:

Identifying these local extrema lets you calculate individual wave heights by taking the difference in height between one trough and the next peak. Performing this step for the entire dataset gives a set of wave heights. Taking the highest third of these values and averaging them gives the Hs measurement.
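The trough-to-peak procedure can be sketched as follows. This is a Python translation of the idea, not the post's actual Matlab script: the local extrema test (an interior sample greater or less than both neighbours) stands in for islocalmax()/islocalmin(), and the 0.1 Hz sinusoid is a stand-in for the simulated wave data.

```python
import numpy as np

# Hypothetical simulated surface elevation: a 0.1 Hz swell sampled at 4 Hz
fs = 4.0
t = np.arange(0, 120, 1 / fs)
eta = np.sin(2 * np.pi * 0.1 * t)

# Local maxima/minima (the role islocalmax()/islocalmin() play in Matlab):
# an interior sample strictly greater/less than both of its neighbours
peaks = np.where((eta[1:-1] > eta[:-2]) & (eta[1:-1] > eta[2:]))[0] + 1
troughs = np.where((eta[1:-1] < eta[:-2]) & (eta[1:-1] < eta[2:]))[0] + 1

# Pair each trough with the next peak and take the height difference
heights = [eta[p] - eta[tr] for tr, p in zip(troughs, peaks[peaks > troughs[0]])]

# Average of the highest third of wave heights gives Hs
h_sorted = np.sort(heights)
hs_thirds = h_sorted[-(len(h_sorted) // 3):].mean()
```

For a pure unit-amplitude sine every trough-to-peak height is 2, so the highest-third Hs comes out as 2 as well.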

The standard deviation of the input data is calculated using the Matlab std() function. This number is then multiplied by 4 to get the significant wave height.
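In Python the equivalent of the std() calculation is a one-liner (again using a stand-in sinusoid rather than the post's simulated data):

```python
import numpy as np

# Hypothetical surface elevation: unit-amplitude sinusoid over full periods
t = np.arange(0, 120, 0.25)
eta = np.sin(2 * np.pi * 0.1 * t)

# Modern definition: significant wave height = 4 x standard deviation
hs = 4 * np.std(eta)
```

For a sine of amplitude a the standard deviation is a/√2, so this estimate comes out as 4a/√2 ≈ 2.83a.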

The difference between the highest-third and standard deviation calculations is shown below: The difference between the two methods varies with the dataset used, but the average difference is around 30-40%. It is proving difficult to find real-world examples of significant wave height being calculated from ocean surface displacement data. Most references to the Hs measurement simply state that the highest-third method is roughly equivalent to the standard deviation method, which the results shown here do not really match. One possible explanation: the rough equivalence of the two definitions relies on wave heights following a Rayleigh distribution, as they approximately do in a random sea. For a pure sinusoid of amplitude a, every wave height is 2a while 4σ is 2√2·a ≈ 2.83a, a difference of about 41%, so simulated data that is close to sinusoidal would be expected to show a gap of roughly this size.

The highest-third calculation requires identifying the peaks in the time domain, which demands a sample rate much higher than the Nyquist rate. This method also requires more processing power, as the local extrema need to be identified; for an embedded system this would be less than ideal.

The standard deviation × 4 measurement is much easier to calculate but again, because the analysis takes place in the time domain, a sample rate much higher than Nyquist would be required to get an accurate picture of ocean surface displacement.

The third option mentioned above, calculating significant wave height from frequency domain data, is much more appealing. Most commercial wave sensors measure ocean waves in the range 0.05-0.67 Hz (1.5-20 second period), so a 2 Hz sample rate would be more than enough to satisfy the Nyquist criterion (twice the highest frequency of interest, 2 × 0.67 Hz ≈ 1.34 Hz). This low sample rate would be an advantage on a low power embedded system.

The Matlab code used in this example can be found on the GitHub repo in the Matlab folder (script name is “gen_and_plot.m”).

## Using the HC-06/HC-05 Bluetooth Adapter For Serial Communication With Linux

For many applications, sending serial data from a microcontroller to a computer is easily achieved using one of these serial to USB adapters: For a recent project the microcontroller in question was mounted on a rotating arm, so using a normal wired USB to serial adapter was out of the question. Here is a short video of the rotating arm:

An HC-06 serial Bluetooth adapter came to the rescue. It had been sitting in a parts bin for several years and this was its moment. Here is a pic of the breadboard circuit with a Teensy, the HC-06 and a BNO055 IMU chip:

The HC-06 only has 4 pins: RX, TX, GND and VCC. Connect the power pins, then cross the data lines: the RX on the module goes to the TX of your microcontroller's serial port, and the TX on the module goes to the microcontroller's RX.

The HC-06 is particularly handy if the computer it connects to is running Linux. In this case, the laptop had built-in Bluetooth and was running Fedora. The first step is to go into the settings, enable Bluetooth and find the MAC address of the HC-06:

The HC-06 shows up as “Linvor”. Clicking on it brings up this window: Copy and paste the MAC address into a text file to save for later use.

This is where using Linux makes things nice and easy. You can bind the MAC address to a serial port using this command:
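The command in question is the BlueZ rfcomm bind, along the lines of the following. The MAC address shown is a placeholder; substitute the one copied from the Bluetooth settings.

```shell
# Bind RFCOMM device 0 to the HC-06's MAC address (placeholder MAC shown)
sudo rfcomm bind 0 00:11:22:33:44:55
```

Note that the rfcomm tool is part of the BlueZ package, which may need to be installed from your distribution's repositories.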

This will create a serial port, most likely called /dev/rfcomm0 (unless you already have another rfcomm device created, in which case it might be rfcomm1, rfcomm2, etc.).

To connect to the port, you can use whatever program you normally use to connect to a serial port. It could be a Python script or a terminal program like Minicom or picocom.

To connect using Picocom the command would be:
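Assuming the HC-06 is still at its default baud rate of 9600 and the bound port is /dev/rfcomm0, the command would look like:

```shell
# Connect to the bound RFCOMM port at the HC-06's default 9600 baud
picocom -b 9600 /dev/rfcomm0
```

Adjust the `-b` value if you have reconfigured the module's baud rate via AT commands.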

Once you run this command, the light on the HC-06 will go from blinking (not connected) to solid red (connected), and you have a Bluetooth serial link.

This is a really handy method for quickly sending data from any microcontroller back to a computer wirelessly. The fact that the Bluetooth link shows up like a normal serial port, without too much faffing around, is what makes this great.

## OpenWave #7 : Displacement measurement results – peak detect vs zero crossing

This post will detail an experiment in which the wave sensor was attached to a rotating arm at three different diameters. These diameters were 40cm, 60cm and 80cm. For each diameter, measurements were taken at two speeds to get an idea of how rotation frequency affects measurement accuracy. This gives a total of 6 sets of data.

The speeds will be referred to as speed 1 and speed 2, with speed 2 being around 1.5 to 2 times faster than speed 1.

Two separate methods of resetting the integration counter were used and compared. The first detects the peaks of the acceleration signal and uses these to reset the integration; the second uses the zero crossings as the reset points.
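Zero-crossing detection is a natural fit for an embedded target because it is just a sign comparison per sample. A minimal sketch of the idea, in Python with a stand-in 1 Hz sinusoid in place of the real acceleration signal:

```python
import numpy as np

# Hypothetical vertical acceleration: a 1 Hz sinusoid sampled at 100 Hz
fs = 100.0
t = np.arange(0, 5, 1 / fs)
accel = np.sin(2 * np.pi * 1.0 * t)

# Zero crossings: indices where the sign bit flips between adjacent samples.
# These indices would be used as the integration reset points.
crossings = np.where(np.diff(np.signbit(accel)))[0]
```

Each entry in `crossings` marks the last sample before the signal changes sign; for this 1 Hz test signal they land near t = 0.5 s, 1.0 s, 1.5 s, and so on.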

## Results

### Zero crossing method:

#### Speed 1:

| Actual (cm) | Measured (cm) | % Error |
|-------------|---------------|---------|
| 40          | 40.97         | 2.42%   |
| 80          | 81.67         | 2.09%   |
| 60          | 62.59         | 4.32%   |

#### Speed 2:

| Actual (cm) | Measured (cm) | % Error |
|-------------|---------------|---------|
| 80          | 79.95         | 0.06%   |
| 60          | 61.14         | 1.9%    |
| 40          | 38.72         | 3.2%    |

### Peak detect method:

#### Speed 1:

| Actual (cm) | Measured (cm) | % Error |
|-------------|---------------|---------|
| 40          | 39.14         | 2.15%   |
| 80          | 82.24         | 2.8%    |
| 60          | 58.64         | 2.27%   |

#### Speed 2:

| Actual (cm) | Measured (cm) | % Error |
|-------------|---------------|---------|
| 80          | 77.57         | 3.04%   |
| 60          | 60.76         | 1.27%   |
| 40          | 38.62         | 3.45%   |

The results are very encouraging. In particular, it is interesting that the zero crossing method provides results very similar to the peak detect method of integration reset. Zero crossing would be the preferred method for an embedded system because detecting zero crossings takes much less processing power than detecting peaks.


## OpenWave #6 : Getting vertical displacement working

Post #5 outlined the steps used to go from vertical acceleration to vertical displacement, and how the results were completely off. The reason the displacement graphs at the end of that post look so different just from changing the integral reset points is a negative offset in the velocity signal, which causes huge problems when you integrate. Let's take a look at the vertical acceleration signal again:

You may notice that there is a negative offset of somewhere around -0.15 in the acceleration signal. This offset was noticed and removed before the first integration from acceleration to velocity. The issue occurred at the second integration stage, going from velocity to displacement: the velocity signal also had a small offset that went unnoticed and caused all of the issues outlined in post #5.

Let's take a look at an example graph in which the velocity signal is not offset-corrected before integrating to get displacement:

The blue vertical lines indicate the separate chunks of data, with everything split on the minima of the acceleration signal. You may notice that the velocity signal has a slight positive bias. When this signal is integrated to get the bottom graph of displacement, the displacement starts at zero but does not return to zero, as it should given that the arm is travelling in a circle.

Now let's take a look at the graphs when you do offset-correct before each integration step:
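The offset-correct-then-integrate sequence can be sketched as follows. This is a Python illustration of the idea rather than the actual processing code: a single chunk of synthetic sinusoidal acceleration with a -0.15 bias (matching the offset observed above) stands in for the real sensor data, and simple cumulative sums stand in for whatever integration scheme is used in practice.

```python
import numpy as np

# Hypothetical vertical acceleration for one chunk between integration
# resets: a 1 Hz sinusoid with a constant -0.15 bias, sampled at 100 Hz
fs = 100.0
t = np.arange(0, 2, 1 / fs)
accel = np.sin(2 * np.pi * 1.0 * t) - 0.15

# Offset-correct before the first integration (acceleration -> velocity)
accel_corrected = accel - accel.mean()
vel = np.cumsum(accel_corrected) / fs

# Offset-correct again before the second integration (velocity -> displacement)
vel_corrected = vel - vel.mean()
disp = np.cumsum(vel_corrected) / fs

# For comparison: skipping the second correction leaves the velocity bias
# in place, so the displacement drifts instead of returning to zero
disp_uncorrected = np.cumsum(vel) / fs
```

With both corrections applied, `disp` ends close to zero as expected for circular motion, while `disp_uncorrected` walks steadily away from zero, reproducing the drift seen in the uncorrected graph.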