The age-old problem of making a robot and having it move about in any kind of sensible way continues to plague humanity.
It’s 2023 – ChatGPT can solve complex problems, Elon Musk is landing rockets, the list goes on…
But I still can’t make a robot and get it to drive around indoors on a defined path without forking out some major moolah for fancy hardware.
Anyways – the idea was mentioned to me a few years ago of using sound and light to get position data. Think of an ultrasonic sensor: it sends out a ping and counts the time taken for the echo to return. From that time we can work out a distance value.
By adding light and multiple receivers into the mix, we don’t need to wait for return pings. The flash of light acts as the stopwatch start point, and each receiver’s stopwatch stops when the ping reaches it.
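Just to put rough numbers on it (assuming the usual ~343 m/s speed of sound in air, and completely made-up timings):

```javascript
// Made-up numbers, just to show the arithmetic:
// the flash of light starts the stopwatch at t = 0,
// and one receiver hears the ping 2.9 ms later.
const speedOfSound = 343;   // m/s in air, roughly
const elapsedTime = 0.0029; // s, read off that receiver's stopwatch
const distance = speedOfSound * elapsedTime;
console.log(distance);      // ~0.99 m from the source to that receiver
```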
From my very limited research into this, the recommended minimum number of “receivers” is three. This is where the whole triangulation/trilateration stuff comes from. This post will cover some software stuff I was messing about with to get my head around trilateration.
Before I go messing about with actual hardware, I wanted to make some software sims to see if I could get my head around the maths of this. I’ve decided to start off in 2D and then move to 3D later.
I used p5.js for all of the software sims – it’s basically Processing but in a web browser, and it’s really cool. I started off with this script, feel free to give her a click:
Basically I needed a way to simulate sound waves, so I just used expanding circles. The cool thing about this is that I can set the “speed of sound” to whatever I want to make my life easier.
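The expanding-circle part boils down to something like this minimal p5.js sketch (the variable names and numbers here are my own stand-ins, not the exact code from the embedded sketch):

```javascript
let waveOrigin = null; // where the "sound" started (the mouse click)
let waveRadius = 0;    // current radius of the wave front
const waveSpeed = 2;   // "speed of sound" in pixels per frame, whatever makes life easy

function setup() {
  createCanvas(600, 400);
}

function mousePressed() {
  // a click is the sound source
  waveOrigin = createVector(mouseX, mouseY);
  waveRadius = 0;
}

function draw() {
  background(30);
  if (waveOrigin) {
    noFill();
    stroke(255);
    circle(waveOrigin.x, waveOrigin.y, waveRadius * 2); // p5's circle() takes a diameter
    waveRadius += waveSpeed; // the wave expands a bit every frame
  }
}
```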
Then once I got the expanding circles going, I added in “sensors”. These sensors are… wait for it… more circles. Basically the code just checks for when the expanding circle (ahem… sound wave) intersects with one of the “sensors”.
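Building on the sketch above, the “has the wave reached this sensor yet?” check is just a distance comparison: the wave front has passed a sensor once the wave’s radius is at least the distance from the wave origin to the sensor’s centre. A rough sketch (again, my own stand-in names rather than the actual code):

```javascript
// A few fixed "sensors" dotted around the canvas.
const sensors = [
  { x: 100, y: 100, hit: false },
  { x: 500, y: 120, hit: false },
  { x: 300, y: 350, hit: false },
];

// Called from draw() after the wave radius has been updated.
function checkSensors(waveOrigin, waveRadius) {
  for (const s of sensors) {
    if (!s.hit && waveOrigin) {
      const d = dist(waveOrigin.x, waveOrigin.y, s.x, s.y); // p5's dist()
      if (waveRadius >= d) {
        s.hit = true; // the expanding circle has swallowed this sensor
      }
    }
  }
}
```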
Each sensor has a stopwatch built into it. Each stopwatch starts when a click is detected and a sound wave is started, and it stops when the sound wave circle intersects the sensor. This is best illustrated by the “Time to receiver” p5.js sketch here:
So once we have the time taken for the sound wave to get from the source to each receiver, we get a distance value for each sensor. Since a sensor has no idea which direction the sound came from, we draw a circle around it with that distance as the radius. Then, by looking at how the circles from each sensor intersect, we can work out the origin point of the sound.
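Under the hood that’s two small bits of maths: turning each stopwatch reading into a radius, and then intersecting the circles pairwise. Something along these lines (standalone helpers with my own names; the distance conversion assumes the same made-up “speed of sound” the sim uses):

```javascript
// Distance "heard" by a sensor: stopwatch time multiplied by the sim's speed of sound.
function sensorDistance(hitTimeMs, speedPerMs) {
  return hitTimeMs * speedPerMs;
}

// Intersection points of two circles: centre c1 with radius r1, centre c2 with radius r2.
// Returns [] if they don't touch, otherwise the one or two intersection points.
function circleIntersections(c1, r1, c2, r2) {
  const d = Math.hypot(c2.x - c1.x, c2.y - c1.y);
  if (d === 0 || d > r1 + r2 || d < Math.abs(r1 - r2)) return [];
  const a = (r1 * r1 - r2 * r2 + d * d) / (2 * d);   // distance from c1 to the chord midpoint
  const h = Math.sqrt(Math.max(0, r1 * r1 - a * a)); // half the chord length
  const mx = c1.x + (a * (c2.x - c1.x)) / d;
  const my = c1.y + (a * (c2.y - c1.y)) / d;
  return [
    { x: mx + (h * (c2.y - c1.y)) / d, y: my - (h * (c2.x - c1.x)) / d },
    { x: mx - (h * (c2.y - c1.y)) / d, y: my + (h * (c2.x - c1.x)) / d },
  ];
}
```

With three sensors you get three pairs of circles, so up to six intersection points. In a perfect, noise-free sim, one point from each pair lands right on the sound origin.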
Check out this sketch for a cool illustration of this:
It’s interesting looking at how the intersection points are grouped. Looking at it from a human point of view, the sound origin appears to be where the intersection points are most tightly packed. This is something that would be easy to check with software. That being said, if you click around for a while, some edge cases pop up here and there which may be tricky to deal with.
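One naive software version of that “most tightly packed” check: take every pairwise intersection point and pick the one with the smallest total distance to all the others. A rough sketch, assuming a points array like the one produced by the circleIntersections() helper above:

```javascript
// Pick the intersection point that sits closest to all the others,
// as a crude stand-in for "where the points are most tightly packed".
function tightestPoint(points) {
  let best = null;
  let bestScore = Infinity;
  for (const p of points) {
    let score = 0;
    for (const q of points) {
      score += Math.hypot(p.x - q.x, p.y - q.y);
    }
    if (score < bestScore) {
      bestScore = score;
      best = p;
    }
  }
  return best; // rough estimate of the sound origin (null if points is empty)
}
```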
Why don’t you just use GPS with 2 cm accuracy which is achievable with RTKLib?
Not unnecessarily complicated enough 😀