Hello,
The principle of the TerraMow lawn robot impressed me so much that I want to try out the concept myself ("learning by doing" / "Open Source TerraMow ;-)").
Principle: A stereo camera (monochrome global-shutter cameras, 80-degree FOV, e.g. a RealSense D435i or an SVPRO synchronized global-shutter consumer stereo color camera, 3200x1600) is used to calculate the 3D points (point cloud). With an RGB camera (e.g. the one built into the RealSense), the 2D image is segmented via AI, i.e. each pixel is given semantics, in the simplest case two classes: lawn/no lawn. The semantics are then transferred to the 3D points ("point cloud segmentation").
Image left: 2D segmentation, right: 3D point cloud segmentation
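The transfer of the 2D semantics to the 3D points can be sketched roughly like this (a minimal sketch, assuming a pinhole camera model with intrinsics fx, fy, cx, cy and a per-pixel class mask; the function name and array shapes are my own, not from any particular library):

```python
import numpy as np

def label_point_cloud(points_cam, seg_mask, fx, fy, cx, cy):
    """Project 3D points (camera frame, Nx3, z pointing forward) back
    into the 2D segmentation mask and copy each pixel's class label
    onto the corresponding point. Points behind the camera or outside
    the image keep the label -1 ("unknown")."""
    z = points_cam[:, 2]
    valid = z > 0
    z_safe = np.where(valid, z, 1.0)          # avoid division by zero
    u = (points_cam[:, 0] * fx / z_safe + cx).astype(int)
    v = (points_cam[:, 1] * fy / z_safe + cy).astype(int)
    h, w = seg_mask.shape
    inside = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels = np.full(len(points_cam), -1, dtype=int)
    labels[inside] = seg_mask[v[inside], u[inside]]
    return labels
```

With a calibrated camera this is just the forward projection run in reverse: each 3D point lands on exactly one pixel, so the per-pixel lawn/no-lawn class carries over directly.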
Localization is done via the 3D points and, if necessary, the RGB images. The RTAB-Map ROS package will be used for this (and for computing the 3D points).
3D point cloud creation and 3D localization
The robot should automatically recognize the lawn boundaries (perimeter), exclusion zones and obstacles: for example, 3D points that rise too high above the ground must not be crossed, nor should ground points whose semantics came out as "no lawn". In a 2D map ("world from above", a simple bitmap file), additional areas can be drawn in where the robot must not drive, or where it should drive ("overriding the automatic detection"). There will be no UI, only start, stop and the bitmap file ("top view") to draw in.
Creation of top view bitmap, additional manual drawing of places/points for "not mowing" or "mowing"
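Combining the height criterion and the semantics into the top-view bitmap could look roughly like this (my own minimal sketch; the cell values, grid size, and 10 cm obstacle threshold are assumptions, and the manual drawing step would simply overwrite cells in the saved bitmap):

```python
import numpy as np

LAWN, NO_GO, UNKNOWN = 1, 0, 255   # hypothetical pixel values

def build_top_view(points_world, labels, cell_size=0.05,
                   max_height=0.10, origin=(0.0, 0.0), shape=(400, 400)):
    """Rasterize labeled 3D points (world frame, z up) into the 2D
    "world from above" bitmap: a cell becomes NO_GO if it contains a
    point higher than max_height (obstacle) or a ground point labeled
    "no lawn" (0); otherwise lawn points (1) mark it LAWN."""
    grid = np.full(shape, UNKNOWN, dtype=np.uint8)
    ix = ((points_world[:, 0] - origin[0]) / cell_size).astype(int)
    iy = ((points_world[:, 1] - origin[1]) / cell_size).astype(int)
    inside = (ix >= 0) & (ix < shape[1]) & (iy >= 0) & (iy < shape[0])
    for x, y, z, lab in zip(ix[inside], iy[inside],
                            points_world[inside, 2], labels[inside]):
        if z > max_height or lab == 0:        # obstacle or "no lawn"
            grid[y, x] = NO_GO                # NO_GO always wins
        elif lab == 1 and grid[y, x] == UNKNOWN:
            grid[y, x] = LAWN
    return grid
```

Because NO_GO always wins within a cell, a single high point (obstacle) blocks the cell even if lawn was seen there, which matches the "do not cross" rule above.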
The hardware will initially be a small PC (i7) mounted on a modified Alfred ("quick prototyping"). Energy consumption is not the focus; the priority is implementing the concept as fast as possible. RTAB-Map already runs on the Intel Core i7 at about 40% CPU load. It could later run on a Raspberry Pi (with a lot of optimization), but my focus is to get it running first; optimization is stage 2.
Roadmap:
1. 3D calculation and 3D localization with RTAB-Map-ROS [done]
2. Relocalization in the point cloud map at different times of day
3. 2D segmentation (lawn/no lawn) with the RGB camera (use a ready-made AI model, no training of my own)
4. Color 3D points with 2D segmentation
5. Robot logic: automatic mapping (find boundaries, create TopView bitmap)
6. Robot logic: mowing based on the TopView bitmap (find unmown points, respect where driving is allowed, drive parallel paths; the path angle is the one set when starting off)
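The last roadmap step could be sketched as a simple boustrophedon (back-and-forth) sweep over the TopView bitmap (my own minimal sketch; the cell value for "mowable" and the fixed 0-degree sweep direction are assumptions — the path angle set when starting off could be handled by rotating the grid first):

```python
import numpy as np

def parallel_paths(grid, mowable=1):
    """Generate boustrophedon sweep segments over the mowable cells of
    the top-view bitmap. Rows alternate direction so the robot mows
    parallel stripes; obstacles (non-mowable cells) split a stripe
    into separate segments."""
    segments = []
    for row in range(grid.shape[0]):
        cols = np.flatnonzero(grid[row] == mowable)
        if cols.size == 0:
            continue
        # split into contiguous runs (a gap > 1 cell breaks a stripe)
        breaks = np.flatnonzero(np.diff(cols) > 1)
        runs = np.split(cols, breaks + 1)
        if row % 2:                       # alternate sweep direction
            runs = [r[::-1] for r in reversed(runs)]
        for r in runs:
            segments.append(((row, int(r[0])), (row, int(r[-1]))))
    return segments
```

Each segment is a (start, end) cell pair; a path follower would convert these back to world coordinates via the bitmap's origin and cell size and re-check the NO_GO cells while driving.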
Cheers,
Alexander