Human


API level: 1


What is it

A Human represents a physical person detected by the robot. Pepper is able to focus on humans and can retrieve some of their characteristics.

How to use it

Getting humans around Pepper

Get access to the humans found by Pepper:

Kotlin:

val humanAwareness: HumanAwareness = qiContext.humanAwareness
val humansAround: List<Human> = humanAwareness.humansAround

Java:

HumanAwareness humanAwareness = qiContext.getHumanAwareness();
List<Human> humansAround = humanAwareness.getHumansAround();

Warning

getHumansAround returns an empty list if Pepper didn’t detect any human.
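
Like all synchronous QiSDK calls, getHumansAround blocks the calling thread. A minimal Kotlin sketch of the asynchronous variant, assuming a TAG constant defined in the calling class:

humanAwareness.async().humansAround.andThenConsume { humans: List<Human> ->
    // Called once the list is available; the list can be empty.
    Log.i(TAG, "${humans.size} human(s) around the robot.")
}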

Getting the human engaged by Pepper

The human currently engaged by the robot is available via the getEngagedHuman method:

Kotlin:

val humanAwareness: HumanAwareness = qiContext.humanAwareness
val engagedHuman: Human = humanAwareness.engagedHuman

Java:

HumanAwareness humanAwareness = qiContext.getHumanAwareness();
Human engagedHuman = humanAwareness.getEngagedHuman();

Warning

getEngagedHuman returns null if no human is engaged.

A human to engage can be chosen by running an EngageHuman action. This is the preferred engagement strategy for one-to-one interactions where Pepper should focus exclusively on one human; a sketch is given after the example below. Otherwise, the engaged human is determined by the BasicAwareness strategy and corresponds to the human currently tracked by the robot. In some use cases you can rely on this default strategy, for example when Pepper only needs to acknowledge or greet the humans around him, show short promotional content, etc. In the following example, each time Pepper starts tracking a new human, he greets them.

Kotlin:

val say: Say = SayBuilder.with(qiContext)
                    .withText("Welcome!")
                    .build()

humanAwareness.addOnEngagedHumanChangedListener { human ->
    human?.let {
        // Run the action asynchronously: a listener callback should not block.
        say.async().run()
    }
}

Java:

Say say = SayBuilder.with(qiContext)
                    .withText("Welcome!")
                    .build();

humanAwareness.addOnEngagedHumanChangedListener(human -> {
    if (human != null) {
        // Run the action asynchronously: a listener callback should not block.
        say.async().run();
    }
});
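
To apply the one-to-one strategy instead, build and run an EngageHuman action on a chosen human. A minimal Kotlin sketch, assuming selectedHuman was picked from getHumansAround:

// Build an EngageHuman action focused on the selected human.
val engageHuman: EngageHuman = EngageHumanBuilder.with(qiContext)
    .withHuman(selectedHuman)
    .build()

// Run it asynchronously so the calling thread is not blocked.
engageHuman.async().run()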

Getting the human position

The human position is available via the getHeadFrame method:

Kotlin:

val human: Human = ...
val headFrame: Frame = human.headFrame

Java:

Human human = ...;
Frame headFrame = human.getHeadFrame();

See: Frame for more details about its usage.
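
For example, the head frame can be combined with the robot frame to estimate how far the human is from Pepper. A minimal Kotlin sketch, assuming qiContext and a Human instance are available:

// Compute the transform between the robot frame and the human head frame.
val actuation: Actuation = qiContext.actuation
val robotFrame: Frame = actuation.robotFrame()
val headFrame: Frame = human.headFrame
val translation: Vector3 = headFrame.computeTransform(robotFrame)
                                    .transform
                                    .translation

// Distance on the ground plane, ignoring the height of the head.
val distance: Double = Math.hypot(translation.x, translation.y)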

Getting the human face picture

API level: 3

The human face picture with accompanying timestamp is available via the getFacePicture method:

Kotlin:

val human: Human = ...
val timestampedImage: TimestampedImage = human.facePicture
val encodedImage: EncodedImage = timestampedImage.image
val imageData: ByteBuffer = encodedImage.data
val time: Long = timestampedImage.time

Java:

Human human = ...;
TimestampedImage timestampedImage = human.getFacePicture();
EncodedImage encodedImage = timestampedImage.getImage();
ByteBuffer imageData = encodedImage.getData();
Long time = timestampedImage.getTime();

The face picture and timestamp correspond to the last available image satisfying the filtering conditions:

  • Keep only images of front-facing faces.
  • Discard blurry images.

Note that the returned byte buffer can be empty if no human face picture satisfies the filtering conditions; otherwise, the last picture taken is returned.

Picture capture, before filtering, occurs at a frequency of 5 Hz. The given image is an 8-bit encoded grayscale image with a minimum size of 25x25 pixels.
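
To display the picture in an Android app, the encoded bytes can be decoded into a Bitmap. A minimal Kotlin sketch, assuming the timestampedImage retrieved above:

// Copy the encoded bytes out of the ByteBuffer.
val buffer: ByteBuffer = timestampedImage.image.data
buffer.rewind()
val bytes = ByteArray(buffer.remaining())
buffer.get(bytes)

// The buffer can be empty if no picture passed the filtering conditions.
val faceBitmap: Bitmap? =
    if (bytes.isNotEmpty()) BitmapFactory.decodeByteArray(bytes, 0, bytes.size) else null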

Retrieving characteristics

The Human object provides some human characteristics:

Kotlin:

val human: Human = ...
val age: Int = human.estimatedAge.years
val gender: Gender = human.estimatedGender
val pleasureState: PleasureState = human.emotion.pleasure
val excitementState: ExcitementState = human.emotion.excitement
val smileState: SmileState = human.facialExpressions.smile
val attentionState: AttentionState = human.attention
val engagementIntentionState: EngagementIntentionState = human.engagementIntention

Java:

Human human = ...;
Integer age = human.getEstimatedAge().getYears();
Gender gender = human.getEstimatedGender();
PleasureState pleasureState = human.getEmotion().getPleasure();
ExcitementState excitementState = human.getEmotion().getExcitement();
SmileState smileState = human.getFacialExpressions().getSmile();
AttentionState attentionState = human.getAttention();
EngagementIntentionState engagementIntentionState = human.getEngagementIntention();

The available human characteristics are:

Characteristic | Represented by … | Based on … | Comment
age | an Integer | facial features | Stabilization time required: takes a minimum of 1 s, depending on the quality of the measurements, so expect unknown values before that.
gender | Gender enum | facial features | Stabilization time required, as for age.
smile state | SmileState enum | facial features |
mood | PleasureState enum | facial features, touch, speech semantics | The PleasureState and ExcitementState characteristics can be combined to determine the basic emotion of the human; see the sketch below the table.
excitement state | ExcitementState enum | voice |
attention state | AttentionState enum | head orientation, gaze direction | Represents the attention of the human during the previous second. Values are given relative to the human's perspective: when the human looks to their own right, the value is LOOKING_RIGHT.
engagement intention state (API level: 3) | EngagementIntentionState enum | trajectory, speed, head orientation | Describes the willingness of the human to interact with the robot. Depending on it, robot behaviors can be created to attract interested humans or to start interacting directly with those proactively seeking an interaction.
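
As noted in the mood row, PleasureState and ExcitementState can be combined to estimate a basic emotion. The mapping below is a hypothetical Kotlin example, not a QiSDK API; adapt the labels to your use case:

// Hypothetical mapping from (pleasure, excitement) to a coarse emotion label.
fun basicEmotion(pleasure: PleasureState, excitement: ExcitementState): String = when {
    pleasure == PleasureState.POSITIVE && excitement == ExcitementState.EXCITED -> "joyful"
    pleasure == PleasureState.POSITIVE && excitement == ExcitementState.CALM -> "content"
    pleasure == PleasureState.NEGATIVE && excitement == ExcitementState.EXCITED -> "angry"
    pleasure == PleasureState.NEGATIVE && excitement == ExcitementState.CALM -> "sad"
    else -> "unknown"
}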

Performance & Limitations

Characteristics refresh rate

The age, gender, smile and engagement intention characteristics have a refresh rate of 5 Hz, while the other characteristics have a refresh rate of 1 Hz.

Characteristics | Refresh rate
Age, Gender, SmileState, EngagementIntentionState | 5 Hz
PleasureState, ExcitementState, AttentionState | 1 Hz
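
Polling a characteristic faster than its refresh rate brings no new information. A minimal Kotlin sketch polling the attention state at 1 Hz, assuming a Human instance and a TAG constant:

// Poll the attention state once per second; AttentionState refreshes at 1 Hz.
val executor = Executors.newSingleThreadScheduledExecutor()
executor.scheduleAtFixedRate({
    val attention: AttentionState = human.attention
    Log.i(TAG, "Attention: $attention")
}, 0, 1, TimeUnit.SECONDS)

Executors and TimeUnit come from java.util.concurrent.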
