Visual Servoing
Problem Statement
Visual servoing controls UAV motion directly from image-space error signals. It is effective for target following and precision alignment when full 3D reconstruction is unavailable.
Model and Formulation
Given image feature error e = s - s^*, the control law is:
$$ \dot{q} = -\lambda L_s^+ e $$
where L_s is the interaction matrix and L_s^+ its pseudo-inverse. In bounding-box tracking, feature vectors include center and area terms.
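The control law above can be sketched for point features using the standard interaction matrix of a normalized image point at depth Z. This is a minimal illustration, not the project's implementation; the function names and the assumption of known per-point depths are mine.

```python
import numpy as np

def interaction_matrix_point(x, y, Z):
    """Interaction matrix L_s for one normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_control(s, s_star, depths, lam=0.5):
    """Camera velocity screw: v = -lambda * pinv(L_s) @ (s - s_star).

    s, s_star: stacked (x, y) features as flat arrays; depths: one Z per point.
    """
    e = s - s_star
    L = np.vstack([interaction_matrix_point(x, y, z)
                   for (x, y), z in zip(s.reshape(-1, 2), depths)])
    return -lam * np.linalg.pinv(L) @ e
```

With two or more well-spread points the stacked matrix constrains all six velocity components; when the error is zero the commanded velocity is zero.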
Algorithm Procedure
- Extract target feature in image frame.
- Compute feature error to desired setpoint.
- Convert image-space error to body-frame commands.
- Apply velocity/attitude commands with saturation limits.
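The steps above can be sketched as a single servo update for bounding-box tracking. The body-frame axis convention, gain values, and the use of box area as a range proxy are illustrative assumptions, not the simulator's actual mapping.

```python
import numpy as np

def servo_step(bbox, image_size, desired_area_frac=0.05,
               gains=(1.0, 1.0, 0.8), v_max=2.0):
    """One servo update: bbox (cx, cy, w, h) in pixels -> saturated
    body-frame velocity command (forward, right, up), axes assumed."""
    cx, cy, w, h = bbox
    W, H = image_size
    # Normalized image-space errors relative to the image center.
    ex = (cx - W / 2) / W                        # lateral offset
    ey = (cy - H / 2) / H                        # vertical offset
    ea = desired_area_frac - (w * h) / (W * H)   # range proxy via box area
    kx, ky, ka = gains
    cmd = np.array([ka * ea, kx * ex, -ky * ey])
    return np.clip(cmd, -v_max, v_max)           # saturation limit
```

A centered box at the desired apparent size yields a zero command; the `np.clip` call enforces the saturation limits from the last step.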
Tuning and Failure Modes
- Gain \lambda too high causes oscillatory camera motion.
- Target occlusion can destabilize command generation without fallback logic.
- Camera latency and rolling shutter distort high-speed tracking.
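The occlusion failure mode is commonly handled with a small watchdog: coast on the last valid command for a short dropout, then command hover. This is a generic sketch of such fallback logic; the class name and hold duration are assumptions.

```python
class OcclusionFallback:
    """Hold the last command briefly after target loss, then command hover."""

    def __init__(self, hold_s=0.5):
        self.hold_s = hold_s
        self.last_cmd = (0.0, 0.0, 0.0)
        self.last_seen = None

    def update(self, cmd, now):
        if cmd is not None:
            # Target visible: pass the command through and remember it.
            self.last_cmd, self.last_seen = cmd, now
            return cmd
        if self.last_seen is not None and now - self.last_seen < self.hold_s:
            # Brief dropout: coast on the last command.
            return self.last_cmd
        # Prolonged occlusion: fall back to hover.
        return (0.0, 0.0, 0.0)
```

Feeding `None` as the command on frames where the detector loses the target keeps the logic independent of any particular tracker.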
Implementation and Execution
```bash
python -m uav_sim.simulations.perception.visual_servoing
```
Evidence

References
- Chaumette and Hutchinson, Visual Servo Control Part I (2006)
- Chaumette and Hutchinson, Visual Servo Control Part II (2007)