A nice blog post from Abagy on this, including illustrations and a video. Even though I spent years developing LIDAR machine vision software, I was unfamiliar with the term "robot vision" until now. (I need to keep up. Only a few months ago did I add "edge computing" to my vocabulary.) The article tries to clarify how robot vision differs from "machine vision" and "computer vision" but leaves the reader hanging a bit. Alex Owen-Hill at Robotiq's blog has more. What I can glean from his explanation is that robot vision is the sort of machine vision and computer vision that must take into account the robot's motions in its environment and the effects the robot has on what it's seeing. So, lots of added complexity due to kinematics. That makes sense.

From one moment to the next, robo's view of the world changes because of what robo is doing with itself and to that world, so robo's software must do some fancy footwork to update its idea of what it sees accordingly. For instance, if it slices a sphere into two hemispheres, it should "see" that the sphere didn't just disappear, followed by the creation of two hemispheres ex nihilo. And if robo rotates quickly, its machine vision shouldn't simply conclude that the world has radically changed for unknown reasons. Fair enough. "Robot vision," got it!
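That "fancy footwork" is, at its core, ego-motion compensation: before concluding the scene changed, re-express the previous scan in the current sensor frame using the robot's own known motion. Here's a minimal numpy sketch of the idea. The function names, the crude nearest-point change test, and the tolerance are all my own illustration, not anything from either article:

```python
import numpy as np

def compensate_ego_motion(points, rotation, translation):
    """Express the previous frame's 3D points in the current sensor frame.

    points: (N, 3) points observed in the previous frame.
    rotation: (3, 3) rotation matrix of the robot's motion since then.
    translation: (3,) translation vector of that motion.
    """
    # Undo the robot's own motion: p_now = R^T (p_prev - t).
    # Row-vector form: (p - t) @ R equals (R^T (p - t))^T per row.
    return (points - translation) @ rotation

def scene_changed(prev_points, curr_points, rotation, translation, tol=0.05):
    """Flag a real scene change only when points moved beyond what the
    robot's own motion explains (a toy nearest-point check)."""
    predicted = compensate_ego_motion(prev_points, rotation, translation)
    # Distance from each current point to its nearest predicted point.
    d = np.linalg.norm(curr_points[:, None, :] - predicted[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1))) > tol
```

With this in place, a fast 90-degree yaw of a static scene produces a predicted cloud that matches the new scan, so nothing is flagged; only points that the kinematics can't account for count as the world actually changing.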
On a related note, here's an article on precision scanning in robotic welding. Its subject is scanCONTROL, a laser weld seam profiler from Bestech Australia. The sensor "is programmed to measure the geometry of the seam to be welded before the actual welding process starts. These high-precision profile measurements enable the welding process to be automated."
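To make "measure the geometry of the seam" concrete, here is a toy sketch of extracting groove geometry from a single laser-line profile of (x, z) points. The real scanCONTROL processing is surely far more sophisticated; the median-surface assumption, the depth-fraction width rule, and the function name are all mine, purely for illustration:

```python
import numpy as np

def seam_geometry(x, z, depth_frac=0.5):
    """Toy groove extraction from one laser-line profile.

    x, z: 1-D arrays of lateral position and height along the laser line.
    Returns (groove_center, groove_depth, groove_width).
    """
    surface = np.median(z)       # assume mostly-flat plates around the seam
    depth = surface - z          # positive inside the groove
    groove_depth = float(depth.max())
    center = float(x[np.argmax(depth)])
    # Width: lateral extent where the dip exceeds a fraction of max depth.
    inside = depth > depth_frac * groove_depth
    width = float(x[inside].max() - x[inside].min())
    return center, groove_depth, width
```

Run on each profile as the scanner sweeps along the joint, numbers like these are what lets the controller adapt torch position and weld parameters before the arc ever strikes.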