Bundle Adjustment: A key optimisation technique in computer vision that jointly refines the 3D coordinates of landmarks and the camera parameters. It seeks to minimise the reprojection error (the difference between where a feature is observed in the image and where the current 3D estimate projects to) in order to improve the precision of the visual reconstruction.
Computer Vision: A field of artificial intelligence that enables computers to interpret and understand the visual world. It involves methods for acquiring, processing, analysing, and understanding digital images to produce numerical or symbolic information.
Dead Reckoning Data: Data derived from dead reckoning, the process of calculating a current position from a previously determined position, advanced using known or estimated speed, elapsed time, and course.
Edge Computing: A distributed computing paradigm that brings computation and data storage closer to the sources of data, in order to improve response times and save bandwidth.
Global Map Optimisation: The process of improving the accuracy of a global map (a map of a large area or the whole environment) by reducing the accumulated localisation error in a system such as Simultaneous Localisation and Mapping (SLAM). It usually involves techniques such as loop closure and global pose-graph optimisation.
Global Positioning System (GPS) Signal: The signal transmitted by GPS satellites, which carries the time-stamped information that GPS receivers on the ground need to compute their precise location (by trilateration from several satellites).
GPS-Degraded Environment: An environment in which GPS signals are present but unreliable or weak due to factors such as multipath propagation, urban canyons, dense foliage, or electronic jamming. This can lead to inaccurate positioning.
GPS-Denied Environment: An environment in which GPS signals are unavailable or blocked, for example inside a building, underwater, in a cave, or under intentional signal jamming.
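The reprojection error that bundle adjustment minimises can be made concrete with a small sketch. The following is a minimal pure-Python pinhole-camera example (all function names and parameter values are illustrative, not from any particular library); a real bundle adjuster would minimise the sum of squared errors of this kind over all cameras and landmarks at once.

```python
import math

def project(point3d, cam_pose, focal, cx, cy):
    """Project a 3D world point to pixel coordinates with a simple pinhole model.
    cam_pose is (tx, ty, tz, yaw): camera translation plus yaw about the Y axis."""
    tx, ty, tz, yaw = cam_pose
    # Transform the world point into the camera frame (translate, then rotate).
    x, y, z = point3d[0] - tx, point3d[1] - ty, point3d[2] - tz
    xc = math.cos(yaw) * x - math.sin(yaw) * z
    zc = math.sin(yaw) * x + math.cos(yaw) * z
    yc = y
    # Perspective division followed by the intrinsics (focal length, principal point).
    return (focal * xc / zc + cx, focal * yc / zc + cy)

def reprojection_error(observed_px, point3d, cam_pose,
                       focal=500.0, cx=320.0, cy=240.0):
    """Euclidean distance between the observed pixel and the projected estimate."""
    u, v = project(point3d, cam_pose, focal, cx, cy)
    return math.hypot(observed_px[0] - u, observed_px[1] - v)

# A landmark straight ahead of an un-rotated camera at the origin projects
# to the principal point, so the error against that observation is zero.
err = reprojection_error((320.0, 240.0), (0.0, 0.0, 5.0), (0.0, 0.0, 0.0, 0.0))
```

In practice the minimisation is done with a non-linear least-squares solver (e.g. Levenberg-Marquardt) over all camera poses and landmark positions simultaneously.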
Human Pose Estimation (HPE): A technique in computer vision that predicts the pose or configuration of the human body, often represented as a set of key body-joint positions. It can be performed in 2D or 3D and can run in real time, enabling the tracking of body movements.
Inertial Measurement Unit (IMU): A device that measures a body's acceleration and angular rate using a combination of accelerometers and gyroscopes, sometimes supplemented by magnetometers; orientation and velocity can be derived from these measurements. IMUs are commonly used for navigation, stabilisation, and correction of GPS data.
Keyframe Selection: In computer vision, the process of selecting certain frames from a sequence of images according to some criterion. Keyframes often represent significant changes in the scene or motion, and focusing computation on these selected frames reduces the overall processing load.
Key Points/Pairs: In computer vision, key points are distinctive locations in an image, such as corners, edges, or blobs, used as a reference system to describe objects. Key pairs are corresponding key points matched between different images.
Light Detection and Ranging (LIDAR): A remote sensing method that uses light in the form of a pulsed laser to measure variable distances to the Earth. These light pulses, combined with other data recorded by the airborne system, generate precise, three-dimensional information about the shape of the Earth and its surface characteristics.
Object Occlusion: In computer vision, occlusion refers to the event in which an object, or part of one, is hidden from view, either because of its position relative to the viewer or because other objects block the line of sight.
Odometry Sensor: A sensor used to estimate change in position over time (odometry). Common examples are wheel encoders, which measure wheel rotation, and inertial measurement units (IMUs), which measure acceleration and angular rate.
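One common keyframe-selection criterion is simply how far the camera has moved since the last keyframe. The sketch below (hypothetical function and threshold, not taken from any specific vSLAM system) selects a new keyframe whenever the estimated 2D camera position has translated beyond a threshold; real systems also consider rotation, tracked-feature overlap, and elapsed frames.

```python
import math

def select_keyframes(poses, min_translation=0.5):
    """Pick keyframe indices from a list of 2D camera positions (x, y).
    A frame becomes a keyframe when the camera has moved at least
    min_translation from the last keyframe; frame 0 is always selected."""
    keyframes = [0]
    last = poses[0]
    for i, (x, y) in enumerate(poses[1:], start=1):
        if math.hypot(x - last[0], y - last[1]) >= min_translation:
            keyframes.append(i)
            last = (x, y)
    return keyframes

# A camera creeping forward in 0.2 m steps: roughly every third frame
# exceeds the 0.5 m threshold and is kept as a keyframe.
frames = [(0.2 * i, 0.0) for i in range(10)]
kfs = select_keyframes(frames)
```

Downstream stages such as local mapping and bundle adjustment then operate only on the selected keyframes, which is how keyframe selection reduces computational load.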
Optimisation: The process of making something as functional or effective as possible. In computer science, optimisation often means choosing the best element from some set of available alternatives.
Relocalisation: The ability of a system to recognise a previously visited location and accurately determine its position within a pre-established map or model. This is a key capability in systems such as SLAM (Simultaneous Localisation and Mapping).
Rigid Pose Estimation (RPE): The process of estimating the position (3D translation) and orientation (3D rotation) of a rigid object with respect to a given coordinate system. "Rigid" refers to the assumption that the object does not deform between different views.
Robot Drift: A common problem in robot navigation in which small errors in motion estimation accumulate over time, causing the robot's perceived position to drift away from its true position.
Simultaneous Localisation and Mapping (SLAM): The computational problem, in robotics and AI, of building or updating a map of an unknown environment while simultaneously keeping track of the device's location within that environment.
Sensor Fusion Model: A technique in which data from several different sensors are combined to compute something that no single sensor could determine alone, for example combining camera and LIDAR data to improve object detection in an autonomous vehicle.
Visual Simultaneous Localisation and Mapping (vSLAM): A variant of SLAM that uses visual data from cameras as the primary sensor to create a map of the environment while simultaneously tracking the camera's location within it. Its main stages are:
- Initialisation: The first stage of vSLAM, in which the initial camera pose (position and orientation) and the structure of the surrounding environment are estimated. This usually involves estimating the relative motion of the camera between two frames and using it to triangulate the positions of the observed keypoints.
- Local Mapping: The process of creating a detailed map of the immediate surroundings, i.e. the part of the environment currently being observed by the robot. This map is updated continuously as the robot moves and observes new features.
- Loop Closure: The situation in which the robot returns to a place it has already visited. By recognising this, the robot can correct errors that have accumulated over time in its map and pose estimate, typically by matching the current view with a previous one and adjusting the map for consistency.
- Relocalisation: The capability of the system to recover its pose (location and orientation) after being lost, usually due to tracking failure or to being initialised in a previously mapped area. The system matches the current observations against the existing map to determine its location.
- Tracking: The process of estimating the robot's pose in real time as it moves through the environment, by identifying and following keypoints from frame to frame to estimate the camera's motion. Tracking quality is crucial for the performance of the whole vSLAM system.
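The interaction between drift and loop closure can be illustrated with a toy sketch. Here, when the system recognises that its final pose should coincide with an already-visited place, the accumulated error is spread linearly along the trajectory. This linear redistribution is a deliberately crude, hypothetical stand-in for the pose-graph optimisation a real SLAM system would run; the function name and data are illustrative only.

```python
def close_loop(trajectory, loop_target):
    """Given a drifting 2D trajectory whose last pose should coincide with
    loop_target (a place recognised as already visited), spread the final
    position error linearly along the path so the loop closes exactly."""
    n = len(trajectory) - 1
    ex = loop_target[0] - trajectory[-1][0]  # accumulated drift in x
    ey = loop_target[1] - trajectory[-1][1]  # accumulated drift in y
    # Earlier poses receive a smaller share of the correction than later ones.
    return [(x + ex * i / n, y + ey * i / n)
            for i, (x, y) in enumerate(trajectory)]

# Odometry claims we ended at (1.0, 0.5), but place recognition says we are
# back at the start (0, 0); the correction closes the loop.
drifted = [(0.0, 0.0), (0.5, 0.2), (1.0, 0.5)]
corrected = close_loop(drifted, (0.0, 0.0))
```

After the correction, the first and last poses coincide, while intermediate poses are adjusted proportionally; real systems instead minimise error over the whole pose graph, respecting all relative-motion constraints.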