Valify - my robot lawnmower project

The difficulty with appearance-based localization (as used in RTAB-MAP) is that it only uses 2D feature points in the video images to find a location. Here's an example of an image where 2D feature points were found. The feature points are found at intensity gradients, i.e. where there is a change from a dark pixel to a light pixel or vice versa:
feature_points.png


These 2D feature points are saved in the map at each location. For localization, the same feature points have to be detected again. A small lighting change and the number of detected feature points changes dramatically. In other words, because it uses the 2D color information of the pixels, it's important that the location always looks the same. 'maplab' (I did not try it) seems to use appearance-based localization too?
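Just to illustrate what such a feature detector does, here is a minimal OpenCV sketch in Python. ORB is only one example detector (RTAB-MAP can be configured with different ones), and the image file name is a placeholder:

```python
# Minimal sketch: detect 2D feature points with OpenCV (ORB is just one example detector).
# 'lawn.png' is a placeholder file name.
import cv2

img = cv2.imread('lawn.png', cv2.IMREAD_GRAYSCALE)

# ORB looks for corner-like spots, i.e. strong intensity gradients, and
# computes a binary descriptor for each of them
orb = cv2.ORB_create(nfeatures=500)          # OpenCV 3+; on OpenCV 2.4 it is cv2.ORB()
keypoints, descriptors = orb.detectAndCompute(img, None)
print('found %d feature points' % len(keypoints))

# draw them, similar to the attached feature_points.png
out = cv2.drawKeypoints(img, keypoints, None, color=(0, 255, 0))
cv2.imwrite('feature_points_debug.png', out)
```

Running this twice on the same scene with different lighting shows nicely how the number and position of the detected points change.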

'http://wiki.ros.org/humanoid_localization' (I did not try it) may work with the Realsense (video1, video2) and it uses geometry-based localization, so it uses the full 3D information (including depth) of the pixels. This may need more computation time. And because it uses the 3D information of the pixels, the third pixel component (depth) must always be the same (i.e. not noisy). Internally, a particle filter (Monte Carlo method) is used to continuously generate and filter correlation results.
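For anyone who hasn't seen a particle filter before, this is the basic idea in a stripped-down 1D Python sketch. The numbers and the toy 'wall' measurement model are made up purely for illustration; humanoid_localization of course does this in 6D against the octree map:

```python
# Toy 1D Monte Carlo localization (particle filter) - made-up numbers, illustration only.
import math
import random

particles = [random.uniform(0.0, 10.0) for _ in range(200)]   # initial pose guesses

def step(particles, odom_delta, measured_range, expected_range):
    # 1. predict: move every particle by the odometry delta, plus motion noise
    particles = [p + odom_delta + random.gauss(0.0, 0.05) for p in particles]
    # 2. weight: particles whose expected sensor reading matches the real one score higher
    #    (tiny floor avoids an all-zero weight list)
    weights = [1e-12 + math.exp(-0.5 * ((measured_range - expected_range(p)) / 0.2) ** 2)
               for p in particles]
    # 3. resample: keep likely particles, drop unlikely ones (Python 3: random.choices)
    return random.choices(particles, weights=weights, k=len(particles))

# toy world: a wall at x = 10 m, the range sensor measures the distance to it
wall_distance = lambda p: 10.0 - p

true_x = 2.0
for _ in range(30):                                   # robot drives 0.1 m per step
    true_x += 0.1
    z = (10.0 - true_x) + random.gauss(0.0, 0.1)      # simulated noisy sensor reading
    particles = step(particles, 0.1, z, wall_distance)

print('true pose %.2f m, estimate %.2f m' % (true_x, sum(particles) / len(particles)))
```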

I'll do some simulations here with Gazebo (it can simulate many sensors including a Realsense) and different ROS packages before working with the real sensors and will let you know my results.
Attachment: https://forum.ardumower.de/data/media/kunena/attachments/905/feature_points.png/
 
Thanks nero76. I see, yes, maplab seems to use appearance-based localization also.
I will dig around a bit more to see what can be used.
Sweet, looking forward to what you find out :)

Update to the build:
First body cover printed. The part came out great. Running PrimaSelect PETG Solid White at 240°C. This print was made with a 0.6mm nozzle instead of the original 0.4mm. The layers look nice and even!
IMG_3074.jpg


The motor brackets are ready and just some minor things are left to be CNC milled.
IMG_3088.jpg

Attachment: https://forum.ardumower.de/data/media/kunena/attachments/4372/IMG_3074.jpg/
 
I just uploaded a behind-the-scenes, step-by-step Autodesk Fusion 360 modeling video on YouTube. See the birth of the robot from scratch.

https://youtu.be/TroIcguRpW0
 
Looks good! :) - Btw, I played further with ROS (Indigo):

1. installed Gazebo and TurtleBot for world (well, still using a flat floor - needs to be changed) and robot simulation - added a Sweep 360 lidar to the TurtleBot (a small sanity check for the simulated scan topic is sketched after this list)
2. installed gmapping for 2D mapping and localization using the lidar - that one gives quick results in 2D (as I mentioned before ;)) but hey, we want 3D localization ;)...
3. installed octomap and created a 3D octree file (map.bt) of the simulated 3D world
4. installed humanoid_localization to localize with the lidar within the saved octree and got it running somehow (although not working precisely yet: video )
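
Not one of the packages above, but a tiny rospy node like this can serve as a sanity check that the simulated Sweep lidar actually publishes scans before feeding them into gmapping/humanoid_localization. The '/scan' topic name is an assumption and may differ in your Gazebo/TurtleBot setup:

```python
#!/usr/bin/env python
# Quick check that the simulated lidar actually publishes LaserScan messages.
# The topic name '/scan' is an assumption - adjust it to your Gazebo/TurtleBot setup.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(msg):
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    rospy.loginfo('%d of %d ranges valid, closest obstacle %.2f m',
                  len(valid), len(msg.ranges), min(valid) if valid else float('nan'))

if __name__ == '__main__':
    rospy.init_node('scan_check')
    rospy.Subscriber('/scan', LaserScan, on_scan)
    rospy.spin()
```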

I can open another topic here (with downloads for the code and launch files) if you are interested in the steps.

Regards,
Alexander
 
Thanks Alexander :)
Cool. Is the 3D octree file a capture of a simulated R200?

Please open another topic and I will test it tomorrow.
and nice YT channel :)

Btw: I am running ROS kinetic, hopefully this also works :)
 
An octree is a 3D grid map containing the 3D points of the world at different resolutions. This makes the lookup faster. The octree can be created using the octomap package and a depth sensor, e.g. the R200, which delivers point clouds.
octree.jpg

Typically, some ROS mapping node is also required to deliver exact location information so that octomap can assemble the individual point cloud parts at the correct locations. In my case I simply used the integrated mapping in octomap, which uses wheel odometry for mapping the consecutive camera point clouds, so for longer movements the mapping will be imprecise with just odometry.
Once the octree is complete, it can be saved for localization. The localization then uses point cloud or lidar sensors.
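To make the 'different resolutions' idea a bit more concrete, here is a tiny self-written octree sketch in Python. It has nothing to do with the actual octomap code, it just shows the concept: every inserted point is pushed down into recursively halved cubes, and a lookup can stop at any cube size, i.e. at any resolution:

```python
# Conceptual octree sketch (not the octomap library, just the idea):
# space is recursively split into 8 half-sized cubes; a lookup can stop
# at any depth, which corresponds to querying the map at a coarser resolution.
class OctreeNode(object):
    def __init__(self, center, size):
        self.center = center      # (x, y, z) of the cube center
        self.size = size          # edge length of the cube
        self.occupied = False
        self.children = None      # up to 8 children once subdivided

    def insert(self, point, min_size=0.1):
        self.occupied = True
        if self.size <= min_size:            # leaf resolution reached
            return
        if self.children is None:
            self.children = {}
        octant = tuple(1 if p >= c else 0 for p, c in zip(point, self.center))
        if octant not in self.children:
            child_center = tuple(c + (0.25 if o else -0.25) * self.size
                                 for c, o in zip(self.center, octant))
            self.children[octant] = OctreeNode(child_center, self.size / 2.0)
        self.children[octant].insert(point, min_size)

    def is_occupied(self, point, resolution):
        # stop descending once the cube is as small as the requested resolution
        if self.size <= resolution or self.children is None:
            return self.occupied
        octant = tuple(1 if p >= c else 0 for p, c in zip(point, self.center))
        child = self.children.get(octant)
        return child.is_occupied(point, resolution) if child else False

# 10 m x 10 m x 10 m world; points would e.g. come from an R200 point cloud
tree = OctreeNode(center=(0.0, 0.0, 0.0), size=10.0)
tree.insert((1.2, -0.4, 0.3))
print(tree.is_occupied((1.2, -0.4, 0.3), resolution=0.1))   # True
print(tree.is_occupied((4.0, 4.0, 4.0), resolution=0.1))    # False
```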
Attachment: https://forum.ardumower.de/data/media/kunena/attachments/905/octree.jpg/
 
Thanks! :)
I will use absolute encoders on the motors (AS5048A), but that's before the gearing. I also made room to mount the AMT102 at the drive shaft; not sure I will use them. I just thought it would be easy to calculate the exact gear ratio between motor and drive shaft that way. So I think I will have good odometry coming from the wheels.
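
For what it's worth, this is roughly how the wheel odometry falls out of those encoder counts for a differential drive. All the constants below (gear ratio, wheel diameter, wheel base) are made-up placeholders, not Valify's real numbers, and the tick deltas assume the wrap-around of the absolute AS5048A angle is already handled:

```python
# Generic differential-drive odometry from encoder ticks (placeholder numbers only).
import math

TICKS_PER_REV = 16384        # AS5048A resolution (14 bit) - measured before the gearing
GEAR_RATIO    = 25.0         # motor revolutions per wheel revolution (placeholder)
WHEEL_DIAM    = 0.25         # m (placeholder)
WHEEL_BASE    = 0.35         # m, distance between the two drive wheels (placeholder)

def update_pose(x, y, theta, d_ticks_left, d_ticks_right):
    """Integrate one encoder delta into the robot pose (x, y in m, theta in rad)."""
    m_per_tick = (math.pi * WHEEL_DIAM) / (TICKS_PER_REV * GEAR_RATIO)
    d_left   = d_ticks_left * m_per_tick
    d_right  = d_ticks_right * m_per_tick
    d_center = (d_left + d_right) / 2.0          # distance travelled by the robot center
    d_theta  = (d_right - d_left) / WHEEL_BASE   # change in heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

x, y, theta = update_pose(0.0, 0.0, 0.0, d_ticks_left=40960, d_ticks_right=40960)
print(x, y, theta)   # straight ahead: ~0.079 m, 0.0, 0.0
```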

Let me know when the new topic is up :)

Have a nice weekend! :)
 
I'm still figuring out the correct humanoid_localization launch parameters - I discovered that the 'initial location guess' distribution (= start particles) did not fit the octomap and corrected one octomap parameter - now it fits, but the robot is shown upside down on the map and the localization doesn't work... maybe I'll create the new topic and show the issues I still have. The gmapping (2D) is working fine.
 
Great! Looking forward to giving it a try. Let's see if the Jetson can handle the requirements ;-)

Now it’s family time, will be back later tonight ;)
 
Maybe it would make sense to create a new topic here (ROS) so we can discuss any ROS specific things there (if you don't want to clutter up your robot thread with this stuff...)
 
Please delete these folders:
catkin_ws/src/octomap/octovis/lib
catkin_ws/src/octomap/octovis/bin
catkin_ws/src/octomap/lib
catkin_ws/src/octomap/bin
I think they should not be there.
PS: I think the order of the 'source' commands in .bashrc is important (first your catkin path, then ROS system path, so your files shadow any system stuff with same names)
 