EngageHuman




Goal - Make Pepper focus on a specific human.

Kotlin:

// Get a human.
val human: Human = ...

// Build the action.
val engageHuman: EngageHuman = EngageHumanBuilder.with(qiContext)
                                    .withHuman(human)
                                    .build()

// Run the action asynchronously.
engageHuman.async().run()

Java:

// Get a human.
Human human = ...;

// Build the action.
EngageHuman engageHuman = EngageHumanBuilder.with(qiContext)
                                            .withHuman(human)
                                            .build();

// Run the action asynchronously.
engageHuman.async().run();

Typical usage - Thanks to the default BasicAwareness, Pepper is already acknowledging humans around him.

However, you should use the EngageHuman action to:

  • manually control which human must be engaged (based on your own engagement strategy or our recommendations),
  • fully engage with a chosen human in a one-to-one interaction,
  • know when the engaged human is lost by the robot,
  • implement a multi-engagement functionality (switch the engaged human).
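For the multi-engagement case, a minimal Kotlin sketch is to cancel the running action and engage the new human. The names engagement and newHuman are illustrative, not part of the API:

// Sketch: switch the engaged human.
// `engagement` holds the Future of the currently running action.
engagement?.requestCancellation()

val newEngagement: EngageHuman = EngageHumanBuilder.with(qiContext)
                                     .withHuman(newHuman)
                                     .build()

engagement = newEngagement.async().run()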

How it works

Human-robot interaction can be divided into three main phases:

  • the engaging phase during which Pepper tries to attract humans or waits for those who are keen to interact with him,
  • the interaction phase during which Pepper keeps his focus on the human he is interacting with,
  • the disengaging phase when Pepper should recognize that the human wants to quit the interaction and finish it properly.

The main interactive content of the app, like verbal exchange and/or content display on the screen, should only occur during the interaction phase.

The engaging phase


To engage a Human, Pepper needs to know which human is the best target to interact with.

HumanAwareness gives a suggestion via the getRecommendedHumanToEngage method:

Kotlin:

val humanAwareness: HumanAwareness = qiContext.humanAwareness
val recommendedHuman: Human? = humanAwareness.recommendedHumanToEngage

Java:

HumanAwareness humanAwareness = qiContext.getHumanAwareness();
Human recommendedHuman = humanAwareness.getRecommendedHumanToEngage();

Warning

getRecommendedHumanToEngage returns null if there is no recommended human to engage.

This engagement strategy is based on the engagement intention state of a human, as well as their distance to the optimal interaction position (0.6 meters in front of the current robot position). Moreover, preference is given to humans who have not already interacted with the robot.

If there are no humans who show willingness to interact with the robot, Pepper can try to approach humans and invite them to interact with him, as described in ApproachHuman.
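Since the recommendation may be null, a defensive sketch of the engaging phase could look like the following. The approachHuman action is assumed to be built as described in ApproachHuman:

val recommendedHuman: Human? = humanAwareness.recommendedHumanToEngage
if (recommendedHuman != null) {
    // A human shows willingness to interact: engage them.
    val engageHuman: EngageHuman = EngageHumanBuilder.with(qiContext)
                                       .withHuman(recommendedHuman)
                                       .build()
    engageHuman.async().run()
} else {
    // Nobody to engage yet: fall back to approaching humans.
    approachHuman.async().run()
}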

The interaction phase

After finding a human to engage, the interaction phase starts by building the EngageHuman action:

Kotlin:

val engageHuman: EngageHuman = EngageHumanBuilder.with(qiContext)
                                    .withHuman(recommendedHuman)
                                    .build()

engageHuman.async().run()

Java:

EngageHuman engageHuman = EngageHumanBuilder.with(qiContext)
                                            .withHuman(recommendedHuman)
                                            .build();

engageHuman.async().run();

The disengaging phase


Recognizing disengagement cues via the OnHumanIsDisengagingListener can be useful in two ways: to provide new content in order to re-engage the human, or to execute the necessary closing actions (e.g. request an email address) and end the interaction properly by saying goodbye before stopping the EngageHuman action.

Kotlin:

// `engagement` is a field holding the Future returned by the action.
var engagement: Future<Void>? = null

val engageHuman: EngageHuman = EngageHumanBuilder.with(qiContext)
    .withHuman(recommendedHuman)
    .build()

val say: Say = SayBuilder.with(qiContext)
                    .withText("Goodbye!")
                    .build()

engageHuman.addOnHumanIsDisengagingListener {
    say.run()
    engagement?.requestCancellation()
}

engagement = engageHuman.async().run()

Java:

// `engagement` is a field holding the Future returned by the action.
Future<Void> engagement;

EngageHuman engageHuman = EngageHumanBuilder.with(qiContext)
        .withHuman(recommendedHuman)
        .build();

Say say = SayBuilder.with(qiContext)
                    .withText("Goodbye!")
                    .build();

engageHuman.addOnHumanIsDisengagingListener(() -> {
    say.run();
    engagement.requestCancellation();
});

engagement = engageHuman.async().run();

Apart from the human’s nonverbal behavior (e.g. a shift of attention, or moving away from the robot), some common goodbye phrases are also considered disengagement cues. However, this option is only available if the robot is in a listening state, i.e. when using the Chat action.
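As a sketch, running a Chat in parallel with the engagement puts Pepper in a listening state so that goodbye phrases can be detected. The topic resource R.raw.my_topic is an illustrative name, not part of the API:

val topic: Topic = TopicBuilder.with(qiContext)
                       .withResource(R.raw.my_topic)
                       .build()

val qiChatbot: QiChatbot = QiChatbotBuilder.with(qiContext)
                               .withTopic(topic)
                               .build()

val chat: Chat = ChatBuilder.with(qiContext)
                     .withChatbot(qiChatbot)
                     .build()

// Run both actions in parallel: Pepper listens while staying engaged.
chat.async().run()
engageHuman.async().run()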

Use cases

Stay focused during an action

You might want Pepper to focus on a specific human when performing an action. For example, this is the case when you want Pepper to talk to a human.

Kotlin:

val human: Human = ...

val engageHuman: EngageHuman = EngageHumanBuilder.with(qiContext)
                                    .withHuman(human)
                                    .build()

val say: Say = SayBuilder.with(qiContext)
                    .withText("Hello!")
                    .build()

engageHuman.addOnHumanIsEngagedListener { say.run() }

engageHuman.async().run()

Java:

Human human = ...;

EngageHuman engageHuman = EngageHumanBuilder.with(qiContext)
                                            .withHuman(human)
                                            .build();

Say say = SayBuilder.with(qiContext)
                    .withText("Hello!")
                    .build();

engageHuman.addOnHumanIsEngagedListener(() -> say.run());

engageHuman.async().run();

Performance & Limitations

Engaged human exclusivity

Because engagement represents a bond with a specific human, Pepper can only engage one human at a time.

Disengagement cases

Pepper can lose the engagement bond if he does not see the engaged human anymore. In this case, the EngageHuman action finishes. Human detection is multimodal, based on cameras and laser/sonar measurements; for better performance, it is advised to map the environment and localize the robot, as explained in LocalizeAndMap.
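One possible way to react when the action finishes (e.g. because the engaged human was lost) is to chain on the returned Future. This is a sketch of one pattern, not the only one:

engageHuman.async().run().thenConsume { future ->
    when {
        future.isSuccess -> {
            // The engagement ended, e.g. the human was lost:
            // look for a new human to engage here.
        }
        future.hasError() -> {
            // The action failed: inspect future.errorMessage.
        }
    }
}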

Human fully engaged

EngageHuman uses all means to keep tracking the engaged human, including moving the mobile base.

However, there are two options that allow you to define constraints to restrict movements during tracking:

  • Let the autonomous abilities track humans and set their degree of freedom to DegreeOfFreedom.ROBOT_FRAME_ROTATION.
  • Use LookAt to track a specific human and set its movement policy to LookAtMovementPolicy.HEAD_ONLY.
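For the second option, a minimal head-only tracking sketch could be the following, assuming human is the Human to track:

val lookAt: LookAt = LookAtBuilder.with(qiContext)
                         .withFrame(human.headFrame)
                         .build()

// Restrict tracking movements to the head only.
lookAt.policy = LookAtMovementPolicy.HEAD_ONLY

lookAt.async().run()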

See also