Kotlin:

```kotlin
// Build the holder for the ability.
val holder: Holder = HolderBuilder.with(qiContext)
    .withAutonomousAbilities(AutonomousAbilitiesType.BACKGROUND_MOVEMENT)
    .build()

// Hold the ability asynchronously.
holder.async().hold()

// Release the ability asynchronously.
holder.async().release()
```

Java:

```java
// Build the holder for the ability.
Holder holder = HolderBuilder.with(qiContext)
        .withAutonomousAbilities(AutonomousAbilitiesType.BACKGROUND_MOVEMENT)
        .build();

// Hold the ability asynchronously.
holder.async().hold();

// Release the ability asynchronously.
holder.async().release();
```
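Both hold() and release() return a Future, so work can be chained once the hold is actually effective. A minimal sketch, assuming the holder built above and using libqi's Future.andThenConsume callback:

```kotlin
// Chain work after the hold is effective.
holder.async().hold().andThenConsume {
    // Background movements are now suspended; the robot stays still.
    // Run logic that needs a motionless robot here.
}
```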
Autonomous abilities are robot behaviors (movements, animations, tracking) occurring in the following situations:

| Autonomous Ability | Type | When |
|---|---|---|
| BackgroundMovement | Passive | Idle time |
| BasicAwareness | Reactive | Reacting to any kind of stimuli |
| AutonomousBlinking | Reactive | Reacting to a human |
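withAutonomousAbilities accepts several abilities at once (it is a varargs method), so a single Holder can suspend all three. A minimal sketch:

```kotlin
// Build a single holder suspending all three autonomous abilities.
val holder: Holder = HolderBuilder.with(qiContext)
    .withAutonomousAbilities(
        AutonomousAbilitiesType.BACKGROUND_MOVEMENT,
        AutonomousAbilitiesType.BASIC_AWARENESS,
        AutonomousAbilitiesType.AUTONOMOUS_BLINKING
    )
    .build()

// Hold all of them asynchronously.
holder.async().hold()
```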
HolderBuilder.withDegreesOfFreedom allows you to define constraints restricting the autonomous abilities. For instance, DegreeOfFreedom.ROBOT_FRAME_ROTATION prevents rotational movements of the robot base.
Kotlin:

```kotlin
// Build the holder for the degree of freedom.
val holder = HolderBuilder.with(qiContext)
    .withDegreesOfFreedom(DegreeOfFreedom.ROBOT_FRAME_ROTATION)
    .build()

// Hold the ability asynchronously.
holder.async().hold()

// Release the ability asynchronously.
holder.async().release()
```

Java:

```java
// Build the holder for the degree of freedom.
Holder holder = HolderBuilder.with(qiContext)
        .withDegreesOfFreedom(DegreeOfFreedom.ROBOT_FRAME_ROTATION)
        .build();

// Hold the ability asynchronously.
holder.async().hold();

// Release the ability asynchronously.
holder.async().release();
```
Note

Bear in mind that these constraints apply only to autonomous abilities such as BasicAwareness; they are not applied during an EngageHuman action. However, LookAt provides something similar, called LookAtMovementPolicy, while EngageHuman does not offer such an option. For further details, see: LookAt - Setting the movement policy.
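As a sketch of that LookAt alternative (targetFrame is an assumed, already-available Frame; the policy restricts tracking movements to the head, keeping the base still):

```kotlin
// Build a LookAt action on an existing target frame.
val lookAt: LookAt = LookAtBuilder.with(qiContext)
    .withFrame(targetFrame)
    .build()

// Keep the base still: only the head tracks the target.
lookAt.async().setPolicy(LookAtMovementPolicy.HEAD_ONLY)

// Run the LookAt asynchronously.
lookAt.async().run()
```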
The prioritization system works as follows: by default, the robot makes background movements (also known as idle movements) and blinks. If the robot detects a stimulus, it may move its head and body, overriding the background movements. Finally, if a high-level behavior such as a Say, an Animate or an EngageHuman is executed, it takes priority over the previous modules.
Thanks to the API level mechanism, new autonomous abilities that become available on the robot will not modify the current behaviour of your application: new abilities are activated only if you rebuild your application with the new QiSDK version.
If I want the robot to keep a posture (e.g. at the end of an animation), BackgroundMovement and BasicAwareness must be held so that the robot can remain perfectly still.
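A minimal sketch for this use case, holding both abilities once the animation has finished:

```kotlin
// Suspend idle movements and stimuli reactions to keep the posture.
val postureHolder: Holder = HolderBuilder.with(qiContext)
    .withAutonomousAbilities(
        AutonomousAbilitiesType.BACKGROUND_MOVEMENT,
        AutonomousAbilitiesType.BASIC_AWARENESS
    )
    .build()

// Hold when the animation ends.
postureHolder.async().hold()

// Release later, when the posture is no longer needed.
postureHolder.async().release()
```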
If I want to freeze the robot while speaking or listening, I should disable the body language. Note that, when using Chat and QiChatbot, you can disable the body language option: Chat to freeze while listening, and QiChatbot to freeze while speaking. For further details, see: Chat and QiChatbot.
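A minimal sketch under these assumptions: a Topic named topic is already built, and BodyLanguageOption.DISABLED freezes the corresponding movements:

```kotlin
// Build the QiChatbot and disable its body language while speaking.
val qiChatbot: QiChatbot = QiChatbotBuilder.with(qiContext)
    .withTopic(topic)
    .build()
qiChatbot.async().setSpeakingBodyLanguage(BodyLanguageOption.DISABLED)

// Build the Chat and disable its body language while listening.
val chat: Chat = ChatBuilder.with(qiContext)
    .withChatbot(qiChatbot)
    .build()
chat.async().setListeningBodyLanguage(BodyLanguageOption.DISABLED)

// Run the chat asynchronously.
chat.async().run()
```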