- How To Detect and Track Multiple Targets Simultaneously
- How To Edit the Properties of a Target
- Best Practices for Large Device Databases
How many targets can be recognized and tracked by a Vuforia app?
The number of targets that your app can recognize and track simultaneously depends on the types of databases you use and on the target types.
Image Targets
Vuforia supports both local and online target databases for Image Targets, referred to as Device Databases and Cloud Databases respectively. See: Comparison of Device and Cloud Databases
Device Databases
Device Databases are stored on the device with your app. You can load and unload Device Databases programmatically, and activate or deactivate them using the ObjectTracker API. Activating a loaded database enables Vuforia to detect any of the targets included in that database.
The number of targets that can be active simultaneously, across one or more loaded Device Databases, is constrained by the device context and its available resources.
Cloud Databases
Image Targets in Cloud Databases are recognized using the Cloud Recognition service. A single Cloud Database can contain up to 1 million Image Targets. When an image in your Cloud Database is recognized, its associated Image Target is returned to your app. Cloud Recognition does not support simultaneous image tracking, and Cloud Databases only support Image Targets.
Multi-targets
The same Device Database rules that apply to Image Targets also apply to Multi-Targets. Since a Multi-Target is made up of a combination of multiple Image Targets, the number of Image Targets in a Multi-Target determines how much of a Device Database's capacity it consumes. For example, a cube or cuboid Multi-Target with six faces (6 Image Targets) counts as six targets in a Device Database.
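This bookkeeping can be sketched as follows (the class and method names here are illustrative, not part of the Vuforia API): the effective target count of a database is the number of standalone Image Targets plus the faces of each Multi-Target.

```java
// Illustrative helper (not a Vuforia API): counts how many Image Target
// slots a set of trackables consumes in a Device Database, given that
// each face of a Multi-Target counts as one Image Target.
public class TargetCount {
    // Number of database slots used by a Multi-Target with the given face count.
    public static int multiTargetSlots(int faces) {
        return faces; // one slot per constituent Image Target
    }

    // Total slots for a database holding plain Image Targets plus Multi-Targets.
    public static int databaseSlots(int imageTargets, int[] multiTargetFaces) {
        int total = imageTargets;
        for (int faces : multiTargetFaces) {
            total += multiTargetSlots(faces);
        }
        return total;
    }

    public static void main(String[] args) {
        // A database with 4 standalone Image Targets and one cube Multi-Target
        // (6 faces) occupies 10 Image Target slots.
        System.out.println(databaseSlots(4, new int[] { 6 }));
    }
}
```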
Object Targets
A single Device Database can contain up to 20 Object Targets. A maximum of 2 Object Targets can be tracked simultaneously.
Frame Markers
Frame Markers are identified by Marker IDs that range between 0 and 511 (512 values in total). No database is required to support Frame Markers because they are decoded based on their bit pattern rather than an image.
Tracking multiple targets simultaneously
Vuforia supports the tracking of multiple targets simultaneously. The maximum number of targets that can be tracked at the same time is determined by two factors.
- The number of targets that can be detected in the device camera's field of view
- The computational performance of the device.
The first factor relates to a target's physical size and the maximum distance from which a set of targets can be detected while maintaining tracking. The greater the number of targets, the farther the camera needs to be from them to capture them all in its FoV. Eventually the necessary distance results in a loss of tracking because the features of the targets are no longer detectable.
The second factor is device dependent. Target tracking is a computationally intensive process. Each device has an effective limit on the number of targets that it can track simultaneously while maintaining satisfactory performance of other tasks like screen rendering. Trying to track too many targets simultaneously will result in a poor user experience, due to a lowered rendering frame rate.
This is why the Vuforia SDK limits the maximum number of simultaneously tracked image-based targets to 5, and provides an API method that enables you to define a lower maximum for your app.
Vuforia.setHint( HINT_MAX_SIMULTANEOUS_IMAGE_TARGETS, how_many );
For instance, you can tell Vuforia to track up to 4 simultaneous image targets by writing:
Vuforia.setHint( HINT_MAX_SIMULTANEOUS_IMAGE_TARGETS, 4 );
In Unity, you can specify this request using the Max Simultaneous Image Targets property in the ARCamera prefab's settings (using the Inspector panel in the Unity Editor). You can also specify this request programmatically at run time using the following C# code:
VuforiaUnity.SetHint ( VuforiaHint.HINT_MAX_SIMULTANEOUS_IMAGE_TARGETS, 4 );
Note that this method has no effect when using the Cloud Recognition Target Finder because the Target Finder will only maintain one active target at a time.
Simultaneous tracking for Frame Markers
Frame Markers impose no specific limit on simultaneous tracking beyond how many markers fit in the camera view. For example, we have successfully tested scenarios with more than 20 Frame Markers tracked at the same time.
Can I manually change the size in the dataset XML?
Yes, it is possible to edit the dataset XML file and change the target size manually. However, be aware that you must maintain the original aspect ratio (width to height). For instance, if you make the width twice as large, you must also double the height.
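As a concrete sketch (the target name and sizes here are illustrative, and the exact schema may vary by SDK version), the dataset XML stores the size as a width/height pair in scene units; doubling a target originally defined as size="247 173" means writing both values scaled by the same factor:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<QCARConfig>
  <Tracking>
    <!-- Original: size="247 173" (width height, in scene units).
         After doubling, both values are scaled by the same factor
         to preserve the aspect ratio. -->
    <ImageTarget name="stones" size="494 346" />
  </Tracking>
</QCARConfig>
```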
What does the target size represent?
The target size represents the actual size of the target in 3D scene units. Do not be confused by the fact that it is a 2D size (width and height). This is because your image target is a flat rectangular 2D shape, but if you think of such a rectangle in 3D space (with some position and orientation), its width and height become actual 3D dimensions. Note that the third dimension (the thickness) is zero and is not relevant, simply because the target is a flat rectangle.
What is the relationship between the image resolution (size in pixels) and the target size?
There is no direct relationship between these attributes. The target size represents the size of the target rectangle in 3D space, that is, in 3D scene units. The image width and height in pixels have nothing to do with the target size. For example, if your image has a resolution of 1024 x 512 pixels, this does not mean that you have to enter 1024 as target width in the target manager.
The only indirect relationship between image size (pixels) and target size is the aspect ratio, so the target width/height ratio must be the same as the image width/height ratio.
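The relationship above can be sketched in code (an illustrative helper, not a Vuforia API): given the image resolution in pixels and a freely chosen target width in scene units, the target height follows from the pixel aspect ratio.

```java
// Illustrative sketch: the only constraint linking image pixels to target
// size is the aspect ratio. The width in scene units is chosen freely;
// the height is derived from the image's pixel width/height ratio.
public class TargetAspect {
    // Target height in scene units for a chosen width, preserving
    // the image's pixel aspect ratio.
    public static float heightFor(float targetWidth, int pixelWidth, int pixelHeight) {
        return targetWidth * pixelHeight / pixelWidth;
    }

    public static void main(String[] args) {
        // A 1024 x 512 pixel image, with a chosen target width of 100
        // scene units, gets a height of 50 units -- not 512.
        System.out.println(heightFor(100f, 1024, 512));
    }
}
```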
What image resolution should I use for target images?
When an uploaded image is resized, anti-aliasing is applied to it. This is perfectly acceptable for photos, but it can reduce the detectability of Image Targets.
To avoid this anti-aliasing impact, ensure that the uploaded image is at least 320 pixels in width.
The stretching and softening introduced by this server-side scaling step lower the feature count and the local contrast of the image. The degradation may not be immediately visible, but such targets can suffer from poor detection and tracking.
What image file formats are supported for Image Targets?
The file must be an 8- or 24-bit PNG or a JPG. A JPG file must be RGB or greyscale. The maximum image file size is 2.25 MB.
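These constraints can be checked up front before uploading. The following is an illustrative validator (not part of any Vuforia tool) that tests only the file extension and size; it does not inspect the bit depth or color space.

```java
// Illustrative pre-upload check for Image Target source files:
// PNG or JPG, and at most 2.25 MB.
public class TargetFileCheck {
    // 2.25 MB expressed in bytes.
    static final long MAX_BYTES = (long) (2.25 * 1024 * 1024);

    public static boolean isAcceptable(String filename, long sizeBytes) {
        String lower = filename.toLowerCase();
        boolean formatOk = lower.endsWith(".png")
                || lower.endsWith(".jpg")
                || lower.endsWith(".jpeg");
        return formatOk && sizeBytes <= MAX_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(isAcceptable("stones.jpg", 1_500_000)); // true
        System.out.println(isAcceptable("stones.bmp", 1_500_000)); // false: wrong format
    }
}
```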
Can I change my target size programmatically?
If you want to change the size of your Image Targets at run time (programmatically, in script code), and you want Vuforia to take the new size into account when estimating the tracked target's distance, you need to:
- Deactivate the DataSet before changing the size
- Use the ImageTarget.SetSize( Vector2 new_size ) API to apply the new size
- Reactivate the DataSet after changing the size
This sample code shows how to check and set target sizes using the Vuforia API
public class TargetInfo : MonoBehaviour
{
    void OnGUI()
    {
        StateManager sm = TrackerManager.Instance.GetStateManager();
        if (GUI.Button(new Rect(50, 50, 200, 40), "Size Up"))
        {
            ImageTracker tracker = TrackerManager.Instance.GetTracker<ImageTracker>();
            foreach (DataSet ds in tracker.GetActiveDataSets())
            {
                // Deactivate the DataSet before changing the target size
                tracker.DeactivateDataSet(ds);
                foreach (Trackable trackable in ds.GetTrackables())
                {
                    if (trackable is ImageTarget)
                    {
                        ImageTarget it = trackable as ImageTarget;
                        Vector2 old_size = it.GetSize();
                        Vector2 new_size = new Vector2(1.5f * old_size.x, 1.5f * old_size.y);
                        it.SetSize(new_size);
                    }
                }
                // Reactivate the DataSet after the size change
                tracker.ActivateDataSet(ds);
            }
        }
        foreach (TrackableBehaviour tb in sm.GetActiveTrackableBehaviours())
        {
            if (tb is ImageTargetBehaviour)
            {
                ImageTargetBehaviour itb = tb as ImageTargetBehaviour;
                float dist2cam = (itb.transform.position - Camera.main.transform.position).magnitude;
                ImageTarget it = itb.Trackable as ImageTarget;
                Vector2 size = it.GetSize();
                GUI.Box(new Rect(50, 100, 300, 40),
                        it.Name + " - " + size.ToString() +
                        "\nDistance to camera: " + dist2cam);
            }
        }
    }
}
Should the target size match the physical size of the printed target?
The target size does not necessarily have to match the real (physical) size of the printed target, for example, the printed size of a paper board, sheet, or other physical support. Also, the size of the physical target can be expressed in different ways depending on which unit of measure you use (cm, mm, inches, meters, etc.). For example, a 20-centimeter wide printed target has a width of 20 if you think of its size in centimeters, but the same target has a width of 200 if you think of it in millimeters (1 cm = 10 mm).
However, in most cases it is convenient (and easier) to choose a unit of measure (for example, millimeters) ahead of time and then define the target size that is consistent with that choice. For example, you may know that your physical target will be printed on A4 paper, which has a width of 210 millimeters. So the easiest thing to do when creating the target is to set the width at 210 in the online Target Manager. This means that the virtual 3D scene units will practically represent millimeters when mapped to the real world.
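The convention above can be sketched as a trivial conversion (an illustrative helper, not a Vuforia API): once a unit of measure is chosen, the target size in scene units is just the physical size expressed in that unit.

```java
// Illustrative sketch: with millimeters chosen as the scene unit,
// a physical width in centimeters maps to scene units by a fixed
// conversion factor.
public class TargetUnits {
    // Scene-unit width for a physical width given in centimeters.
    public static float widthInSceneUnits(float widthCm, float unitsPerCm) {
        return widthCm * unitsPerCm;
    }

    public static void main(String[] args) {
        // An A4 sheet is 21.0 cm wide; with millimeters as scene units
        // (10 units per cm), its target width is 210.
        System.out.println(widthInSceneUnits(21.0f, 10f));
    }
}
```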
Note that in a virtual world defined in OpenGL (in your app), a unit is just a number and does not represent any physical unit of measure. The mapping with the real world size occurs only in Augmented Reality since the target is superimposed on the real world objects.
What is the recommended physical target size for a given distance?
For tabletop, near-field, product shelf and similar scenarios, a physical printed image target should be at least 5 inches wide, or 12 cm, and of reasonable height for a good AR experience. However, the recommended size varies, based on the actual target rating and the distance to the physical image target.
In general, the greater the distance between the camera and the target, the larger your target should be. Smaller targets are suited to close-range AR experiences, while larger targets are more suitable when the AR experience takes place at a greater distance from the target.
As an estimate, consider that a 20-30 cm wide target should be detectable up to about 2 – 3 meters distance, which is about 10 times the target size. However, be aware that this value is an empirical indication and the actual size/distance ratio may vary significantly depending on many factors, such as the following:
- The lighting conditions of your environment
- The device camera focus mode
- The target rating (star rating: 1 to 5 stars)
- The viewing angle (Is the camera facing the target directly or is the camera facing the target from a steep oblique angle?)
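The rule of thumb above can be expressed as a simple estimate (an illustrative helper, not a Vuforia API); remember that the factors listed above can shift the real detection range significantly in either direction.

```java
// Illustrative rule of thumb from the text: a target is typically
// detectable up to roughly 10 times its printed width. Lighting, camera
// focus mode, target rating, and viewing angle all affect this in practice.
public class DetectionRange {
    // Estimated maximum detection distance, in the same unit as targetWidth.
    public static float estimatedMaxDistance(float targetWidth) {
        return 10f * targetWidth;
    }

    public static void main(String[] args) {
        // A 25 cm wide target: detectable up to roughly 250 cm (2.5 m).
        System.out.println(estimatedMaxDistance(25f));
    }
}
```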
Why do my augmentations disappear beyond a certain distance?
This issue may be due to one of the following reasons:
- The target is too small for operating at such distance
- Suggestion: Consider increasing the size of your physical target.
- The far clipping plane (in OpenGL or in the Unity camera settings) is too small and is causing the 3D augmentation models to be clipped (not rendered)
- Suggestion: Check and then possibly increase the value of your far clipping plane distance. In the OpenGL samples, this is usually done with code similar to the following:
JAVA (Android)
projectionMatrix = Tool.getProjectionGL(camCalibration, near_distance, far_distance);
or (C++, Android/iOS)
projectionMatrix = Vuforia::Tool::getProjectionGL(cameraCalibration, near_dist, far_dist);
In Unity, the near and far clipping planes can be set directly in the ARCamera inspector.