Camera-based obstacle/lawn detection (experimental)

AlexanderG

Lawn robot freak and project co-founder
Team member
Imagine you could teach your robot what lawn looks like, based on a couple of example images (positive examples), plus another couple of example images of what lawn does not look like (negative examples). Then you add a USB camera to your robot, capture a live video stream, and in each video image take a rectangular area (a small window at the bottom, in front of the robot) and let the robot decide: lawn or not lawn? If the robot decides 'not lawn', it probably sees an obstacle. If so, you trigger a bumper event...
(Screenshots: live camera view with lawn / not-lawn classification)
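The core idea can be sketched in a few lines: crop a small window at the bottom-center of each frame and let a classifier decide. This is only a sketch, the function names, window fractions and the 0.5 threshold are assumptions, not code from the download:

```python
# Sketch: take a small rectangle at the bottom-center of each video frame
# (the area directly in front of the robot) and classify it lawn / not lawn.
import numpy as np

def crop_roi(frame, width_frac=0.4, height_frac=0.25):
    """Return a rectangle at the bottom-center of the frame."""
    h, w = frame.shape[:2]
    rw, rh = int(w * width_frac), int(h * height_frac)
    x0 = (w - rw) // 2
    y0 = h - rh
    return frame[y0:h, x0:x0 + rw]

def is_lawn(roi, classify):
    """classify(roi) -> probability of 'lawn'; below 0.5 we would
    treat the window as an obstacle and trigger a bumper event."""
    return classify(roi) >= 0.5
```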

Example video:

This article describes:
  • how to install Python on an Ubuntu 18.04 computer
  • how to train a neural network on that Ubuntu computer with a handful of lawn images (example images for training and testing are included)
  • how to install Tensorflow Lite on Raspberry/Banana PI
  • how to use the trained network for lawn detection
NOTE: This is experimental and this is for experts and masters only. Only start with this after mastering your robot. And take your time :)

Install Python and libraries on an Ubuntu computer​

This will work on Windows or Mac too, but I will describe the steps for Ubuntu only.
  1. Install Anaconda or Miniconda on your computer: https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html
  2. Create a new Conda environment:
    conda create -n py36 python=3.6
  3. Activate new Conda environment:
    conda activate py36
  4. Install missing Python-libraries:
    pip install opencv-python==4.5.4.58
    pip install numpy==1.19.5
    pip install matplotlib==3.3.3
    pip install tensorflow==2.5.0
    pip install tensorflow-hub==0.11.0
    pip install scipy==1.5.2
  5. Download 'texture_detection.zip' and extract it:
    https://drive.google.com/file/d/1LVEnNLcN6vYtEVAUAzoy9Vk0ufSK7we2/view?usp=share_link

Train the neural network with a handful of images on the Ubuntu computer​

  1. Go into the 'texture_detection' folder and run 'python train_texture.py'. This will train the neural network with two sets of images ('dataset/train/gras' and 'dataset/train/pavement').

    The neural network training should start:
    (Screenshot: training progress output)

    The trained neural network can be found at 'dataset/saved_texture_model/saved_model.pb' (for TensorFlow), and a converted one at 'model.tflite' (for TensorFlow Lite). The TensorFlow Lite model will be used on the Raspberry.
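A script like train_texture.py might look roughly like the sketch below: train a small binary classifier on the two image folders, then export both a SavedModel and a TFLite model. The network architecture, image size and epoch count here are assumptions for illustration, not the actual values from the download:

```python
# Hedged sketch of a training + TFLite export script.
import tensorflow as tf

IMG_SIZE = (64, 64)  # assumed input size

def build_model():
    return tf.keras.Sequential([
        tf.keras.Input(shape=IMG_SIZE + (3,)),
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(2, activation="softmax"),  # gras / pavement
    ])

if __name__ == "__main__":
    # One sub-folder per class: dataset/train/gras, dataset/train/pavement
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "dataset/train", image_size=IMG_SIZE, batch_size=8)
    model = build_model()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, epochs=10)
    model.save("dataset/saved_texture_model")  # TensorFlow SavedModel
    # Convert for TensorFlow Lite on the Raspberry/Banana Pi
    converter = tf.lite.TFLiteConverter.from_saved_model(
        "dataset/saved_texture_model")
    open("model.tflite", "wb").write(converter.convert())
```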

Try out trained neural network with test images​

  1. Run the lawn detection on a couple of test images - it will output the predicted labels:

    python test_texture.py

    (Screenshot: predicted labels for the test images)
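The prediction step boils down to mapping the model's output vector to a class name. A minimal sketch, assuming the label order matches the training folders ('gras', 'pavement'), which you should verify against the training script:

```python
# Map a classifier output vector to its class name.
import numpy as np

LABELS = ["gras", "pavement"]  # assumed to match the training folder order

def predicted_label(probabilities, labels=LABELS):
    """Return the class name with the highest score."""
    return labels[int(np.argmax(probabilities))]

# With a trained Keras model it could be used like this (illustrative):
#   img = tf.keras.utils.load_img("dataset/test/some_image.jpg",
#                                 target_size=(64, 64))
#   probs = model.predict(np.expand_dims(np.array(img), 0))[0]
#   print(predicted_label(probs))
```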

Try out trained neural network with USB camera​

  1. Attach a USB camera to your laptop and try out the trained network on a live video stream:
    python detect_lawn.py

    (Screenshots: live lawn detection on the laptop camera stream)

Install TensorFlow Lite on Raspberry/Banana PI​

  1. Update the Python3-Package-Installer (PIP):
    pip3 install --upgrade pip
  2. Install Python OpenCV library:
    pip3 --no-cache-dir install opencv-python
  3. Install Tensorflow Lite:
    python3 -m pip install tflite-runtime

Run lawn detection on Raspberry/Banana PI​

  1. Copy trained neural network model ('model.tflite') to your Raspberry
  2. Copy 'detect_lawn.py' to your Raspberry
  3. Edit 'detect_lawn.py' and set 'USE_TF=False' in the code (to use Tensorflow Lite model)
  4. Attach USB camera and run lawn detection:
    python3 detect_lawn.py
(Screenshots: lawn detection running on the Raspberry)
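On the Pi, inference goes through the tflite_runtime interpreter instead of full TensorFlow. A hedged sketch of what detect_lawn.py with USE_TF=False might do; the 64x64 input size and float32 tensor layout are assumptions that must match the model actually trained:

```python
# Hedged sketch: TFLite inference on a camera frame (Raspberry/Banana Pi).
import numpy as np

def preprocess(frame, size=64):
    """Crude nearest-neighbour resize to size x size plus a batch
    dimension, as float32 (assumed model input format)."""
    h, w = frame.shape[:2]
    ys = np.linspace(0, h - 1, size).astype(int)
    xs = np.linspace(0, w - 1, size).astype(int)
    small = frame[ys][:, xs]
    return np.expand_dims(small.astype(np.float32), 0)

if __name__ == "__main__":
    import cv2
    from tflite_runtime.interpreter import Interpreter

    interpreter = Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    cap = cv2.VideoCapture(0)  # USB camera
    ok, frame = cap.read()
    if ok:
        interpreter.set_tensor(inp["index"], preprocess(frame))
        interpreter.invoke()
        print(interpreter.get_tensor(out["index"]))  # class scores
```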

Idea/ToDo: Trigger obstacle in robot firmware​

If the camera detects an area as non-lawn, the Python code could trigger an obstacle:
  1. Look at the heatmap Python code to see how to connect to the robot firmware as an HTTP client.
  2. Send the trigger-obstacle command ('AT+O') via the HTTP client to the robot firmware (https://github.com/Ardumower/Sunray...daaa33f1369395b9f8515464/sunray/comm.cpp#L838).
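The two steps above could be sketched like this. Note this is a rough assumption-laden sketch: the robot URL is hypothetical, and the real firmware protocol (request framing, possible checksum or encryption on AT commands) must be taken from the heatmap example code:

```python
# Hedged sketch: send the trigger-obstacle AT command over HTTP.
import urllib.request

ROBOT_URL = "http://192.168.2.15:80/"  # hypothetical robot address

def trigger_obstacle_body():
    """The AT command that makes the firmware report an obstacle.
    The real protocol may require a checksum suffix - check the
    heatmap example for the exact framing."""
    return "AT+O\n"

def trigger_obstacle(url=ROBOT_URL):
    req = urllib.request.Request(url, data=trigger_obstacle_body().encode())
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()
```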

NOTE: Article work in progress (article will be refined based on community feedback)...
 
Hi @AlexanderG
What hardware did you use to make the video, maybe a laptop?
I'm not sure the Banana Pi can run TensorFlow AI vision and the Sunray app at the same time, or maybe by using C++ software.

Maybe I can test your software on a Raspberry Pi or Jetson Nano to see the possible speed. Actually, on the Pi side the latest version, Bullseye, kills the picamera compatibility :cry:
 
Hello @bernard , the steps above describe the setup for both an Ubuntu/Windows laptop and a Raspberry/Banana Pi. I tested it yesterday, and TensorFlow Lite (Python) can indeed be installed on the Raspberry/Banana Pi (see the updated steps and updated download in the article above). However, the training has to be done on the laptop. This generates a TensorFlow Lite network that can then be used on the Raspberry with the example code shown above. I get a 5 Hz processing rate on the Raspberry/Banana Pi. I don't think a C++ port would be any faster, so it's fine to use the Python example above...
 
Hello,
you described very interesting investigation examples.
What kind of hardware is needed? Is a Raspberry Pi 3 enough or do I need a Raspberry Pi 4?
It sounds as if every USB camera is usable? Is this correct, or what is preferable?
 
Depends on how you train it ;-) - Try out the example; it's quick and easy to train the network (training only takes a few minutes)...
 
Yes, I trained this example. The distinction between lawn and pavement was quite good. But with the camera pointed at a bush it still says "lawn" ;-)
I think it needs more classes and more pictures of plants as "this is not grass" examples to learn from.
But at the moment I see no advantage in this kind of detection. It is not exact enough to improve the accuracy of edge mowing (compared to RTK-only positioning).
It could be worthwhile to train on hedgehogs, toys, ... to trigger an obstacle avoidance. But I think the bumper trigger is still good enough.

I think if we find an additional sensor to improve the accuracy of secure lawn edge mowing, we could have an advantage in the system.
Or if we find an additional positioning sensor for orientation in the garden in poor RTK areas, we could have a very big advantage.

At the moment the market is full of new players with new wireless lawn mowers using different concepts. I think within one or at most two years we will see in customer reviews which concepts really work.
- optical orientation
- optical lawn edge detection
- orientation by small local radio senders in the garden
...
I think there could be a solution among these systems that could improve our GPS-only orientation system in the future.

But nevertheless, you have found a very interesting environment here for learning how to work with neural networks, and for using our mowers not only for lawn mowing but also for education/learning more about robotics.

All these tests could produce great ideas to optimize the lawn mowing system.
 