How I Used AI to Help with Blind Navigation

Blind navigation is a huge problem that affects millions of people around the world

The World Health Organization estimates that there are 39 million blind people in the world. That’s 39 million people who can’t see anything when crossing the street.

So how do we fix this?

I was thinking about this problem when I realised something I still find unbelievable: the technology needed to solve it already exists; it just isn’t being used to help blind people.

Then I thought: how can I change that?

In theory, all I’d have to do is take the same tech self-driving cars use to recognise traffic lights and package it into a device that blind people can use. Any such device has to meet three requirements:

  1. The first is accuracy: misreading a traffic light would be dangerous for the user.
  2. The second is speed: to be usable for navigation, it has to run in real time.
  3. The third is hardware: it has to run on a small, affordable processor, because the device has to be worn and should be reasonably priced. I ended up choosing a Raspberry Pi for its low cost and small size; a sketch of what the detection loop on it might look like follows this list.
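To make this concrete, here is a minimal sketch of what a real-time detection loop on the Pi could look like. Everything specific in it is a placeholder for illustration, not something from this project: the TensorFlow Lite runtime, the model file name traffic_light.tflite, the 224x224 float input, the three-class output, and the 0.9 confidence threshold.

```python
# Minimal sketch of a real-time traffic-light recogniser on a Raspberry Pi.
# Assumptions (placeholders, not this project's actual setup): a TensorFlow
# Lite classifier saved as "traffic_light.tflite" with a 224x224 RGB float
# input and three output classes in the order red / yellow / green.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

LABELS = ["red", "yellow", "green"]  # hypothetical class order

interpreter = Interpreter(model_path="traffic_light.tflite")
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

cap = cv2.VideoCapture(0)  # Pi camera exposed as /dev/video0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize the camera frame to the model's input size and normalise.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    batch = cv2.resize(rgb, (224, 224)).astype(np.float32)[np.newaxis] / 255.0
    interpreter.set_tensor(input_index, batch)
    interpreter.invoke()
    scores = interpreter.get_tensor(output_index)[0]
    # Only announce confident predictions: a wrong reading is more
    # dangerous for the user than no reading (requirement 1).
    if scores.max() > 0.9:
        print(LABELS[int(scores.argmax())])
```

A loop like this is also where the real-time budget goes: everything outside the `invoke()` call is cheap, so meeting requirement 2 comes down to keeping the model itself small.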

So I had to build an accurate model that could run in real time even with low processing power

Meeting the speed and accuracy requirements at the same time meant focusing heavily on model architecture: the network had to stay small enough to run in real time on the Pi without giving up accuracy.
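As an illustration of the kind of architecture this points to (not the exact model from this project): the standard way to keep a vision network fast on Pi-class hardware is to replace full convolutions with depthwise separable ones, the trick behind MobileNet. A minimal Keras sketch of that idea, again assuming a 224x224 RGB input and three classes:

```python
# Minimal sketch of a lightweight classifier in the MobileNet style.
# This is an assumed stand-in, not this project's exact architecture.
# Depthwise-separable convolutions cut the multiply count sharply,
# which is what makes real-time inference feasible on a Raspberry Pi.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    # Each separable block: a cheap per-channel filter, then a 1x1 mix.
    layers.SeparableConv2D(64, 3, strides=2, padding="same", activation="relu"),
    layers.SeparableConv2D(128, 3, strides=2, padding="same", activation="relu"),
    layers.SeparableConv2D(128, 3, strides=2, padding="same", activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation="softmax"),  # red / yellow / green
])
model.summary()  # roughly 30k parameters, tiny by vision-model standards
```

The separable convolution is the key design choice here: it approximates a full convolution at a fraction of the multiply count, which is the main lever for real-time speed on weak hardware. A model this size also converts cleanly to TensorFlow Lite for a loop like the one sketched earlier.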

That’s when I designed my first prototype wearable

Thanks for reading!
