Qi SDK

Introduction

The Qi SDK allows Android developers to take advantage of Pepper’s features. Once an application has taken control of the robot, it can make the robot perform actions and react to its environment. The API focuses on providing high-level instructions to the robot.

See the full API Reference.

These features are organized into several packages:

  • Focus, ensures that an application has exclusive control of the robot.
  • Actuation, provides features related to robotic movements.
  • Interaction, provides human-robot interaction features.

The Qi SDK uses libqi for communication with the robot. It provides an asynchronous API, and uses futures to track and chain remote calls.

Using the Qi SDK

The easiest way to start is to use the Qi SDK Tools for Android Studio. See Getting Started, or directly create a new project.

If you are adapting an existing project to use the Qi SDK, you can simply add the dependency to it.

Declaring qisdk dependency

In your Android top-level build.gradle, add the following Maven repository: http://android.aldebaran.com/sdk/maven. It may look like the following:

allprojects {
    repositories {
        jcenter()
        maven {
            url 'http://android.aldebaran.com/sdk/maven'
        }
    }
}

In your application’s build.gradle, add the qisdk dependency.

dependencies {
    compile 'com.aldebaran:qisdk:0.7'
}

Overview

The main concept in the Qi SDK is that objects can be contacted over the network to respond to function calls and to expose signals and properties. When connected to a Qi server, only the objects registered on it are available. These objects are named Services, and they are the first thing a Qi client sees.

Services

Most of the functionality in the Qi SDK is provided by the Qi services, such as Actuation and Interaction.

To get a service instance, just use its static getter:

Interaction interaction = Interaction.get(this);

(In all the code samples, this refers to an Android Activity.)

Actions

Actions are objects that can be run on the robot.

The available actions include Say and Listen.

To create an action, simply pass an Android context to its constructor. Every action exposes one or more methods. For instance, Say exposes run(…):

Say say = new Say(this);
say.run("Hello world!");

These methods return a Future, used to retrieve the result either synchronously or asynchronously. See the Future section.

Other objects

Some objects are neither services nor actions. They are typically used as parameters for running actions.

Like actions and services, they are entities, not values: each instance references an object on the robot, and that object is not copied when it is transmitted.

These objects include:

  • Animation
  • EditableFrame
  • Frame
  • Handle
  • Human
  • PhraseSet

The way such objects are created differs from one type to another.

PhraseSet can be created directly:

PhraseSet phraseSet = new PhraseSet(this, "hi", "hello");

Animation can be created from assets or resources:

Animation animation = Animation.fromAssets(this, "Element.anim");

Human and Frame, on the other hand, cannot be created directly: they are always retrieved from another object.

Frame frame = Actuation.get(this).robotFrame();

Structures

Structures are just plain Java objects.

Contrary to the objects described above, they represent values, not entities: their internal data is copied when they are transmitted.

The available structures are:

  • ListenResult
  • Phrase
  • Quaternion
  • Transform
  • TransformTime
  • Vector3

They are created like any Java object, without a context:

Phrase phrase = new Phrase("Hello world!");
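The difference between structures and entities can be illustrated in plain Java with a hypothetical value class (Point3 below is illustrative only, not an SDK type): a structure is defined by its data, so copying or transmitting it duplicates that data rather than sharing a reference.

```java
// Point3 is a hypothetical value class (not part of the Qi SDK), used only
// to illustrate structure semantics: its identity is its data, so a copy
// duplicates the internal data instead of referencing a remote entity.
final class Point3 {
    final double x, y, z;

    Point3(double x, double y, double z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    // Copying a value duplicates its data; the copy is a distinct instance.
    Point3 copy() {
        return new Point3(x, y, z);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Point3)) return false;
        Point3 p = (Point3) o;
        return x == p.x && y == p.y && z == p.z;
    }

    @Override
    public int hashCode() {
        return java.util.Objects.hash(x, y, z);
    }
}
```

By contrast, an entity such as Frame always refers to one object on the robot; copying the reference never duplicates that object.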

Future

The Qi Future (an extension of the standard Java Future) is a central concept in the Qi SDK. Every interaction with the robot eventually returns a Future.

It allows retrieving the result synchronously or asynchronously.

Synchronously

To get the result synchronously, simply call get():

Listen listen = new Listen(this);
PhraseSet phraseSet = new PhraseSet(this, "hi", "hello");
Future<ListenResult> future = listen.run(phraseSet);
// don't do this in the UI thread!
ListenResult result = future.get();

Never call get() from the UI thread unless isDone() has returned true, otherwise it can freeze the UI. In that case, you can call getValue() instead, to avoid a try/catch block.
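Since a Qi Future extends the standard Java Future, this pattern can be sketched with java.util.concurrent types alone (an illustrative stand-in, not an actual SDK call):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class NonBlockingPoll {
    // Wait for completion without blocking indefinitely: poll isDone()
    // and only call get() once the future has finished.
    static String pollResult(Future<String> future) throws Exception {
        while (!future.isDone()) {
            Thread.sleep(10); // in a real app, re-check on the next UI tick instead
        }
        return future.get(); // safe: the future has already completed
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // Illustrative stand-in for a robot call that returns a Future.
        Future<String> future = pool.submit(() -> "hello");
        System.out.println(pollResult(future));
        pool.shutdown();
    }
}
```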

A version with a timeout is also available:

ListenResult result = future.get(2, TimeUnit.SECONDS);

In case of error during the execution, get() throws an ExecutionException. In case of cancellation, it throws a CancellationException.

try {
    ListenResult result = future.get();
} catch (ExecutionException e) {
    // an exception occurred during the asynchronous execution
    Throwable cause = e.getCause();
    // …
} catch (CancellationException e) {
    // the future has been cancelled
}

Asynchronously

A mechanism is provided to chain Qi futures. Retrieving a future result asynchronously is just a particular case.

Chaining calls using then / andThen

There are two methods to chain futures: then(…) and andThen(…).

  • then(…) executes the callback when the future finishes, whether with a result, an error or a cancellation;
  • andThen(…) executes the callback only when the future finishes with a result.

Both expect a FutureFunction<Ret, Arg> as a parameter.

Listen listen = new Listen(this);
PhraseSet phraseSet = new PhraseSet(this, "hi", "hello");
Future<ListenResult> listenFuture = listen.run(phraseSet);
Future<Void> sayFuture = listenFuture.andThen(new FutureFunction<Void, ListenResult>() {
    @Override
    public Future<Void> execute(Future<ListenResult> future) {
        // we used andThen(), so there is a value
        ListenResult result = future.getValue();
        Phrase heardPhrase = result.getHeardPhrase();
        return new Say(MyClass.this).run(heardPhrase);
    }
});

QiFunction and QiCallback

However, FutureFunction<Ret, Arg> is cumbersome to use, because the caller is responsible for checking whether the future finished with a result, an error or a cancellation.

For convenience, two implementations are provided (QiCallback and QiFunction), dispatching the result to three different methods:

  • onResult(…)
  • onError(…)
  • onCancel(…)

The QiFunction methods return a Future. Use it to chain futures.

Listen listen = new Listen(this);
PhraseSet phraseSet = new PhraseSet(this, "hi", "hello");
Future<ListenResult> listenFuture = listen.run(phraseSet);
Future<Void> sayFuture = listenFuture.then(new QiFunction<Void, ListenResult>() {
    @Override
    public Future<Void> onResult(ListenResult result) {
        Phrase heardPhrase = result.getHeardPhrase();
        return new Say(MyClass.this).run(heardPhrase);
    }

    @Override
    public Future<Void> onError(Throwable error) throws Exception {
        Log.e(TAG, "Error", error);
        // convert the exception
        throw new MyCustomException(error);
    }
});

The QiCallback methods return void. Prefer it when you have nothing to return.

Listen listen = new Listen(this);
PhraseSet phraseSet = new PhraseSet(this, "hi", "hello");
Future<ListenResult> listenFuture = listen.run(phraseSet);
listenFuture.then(new QiCallback<ListenResult>() {
    @Override
    public void onResult(ListenResult result) {
        Phrase heardPhrase = result.getHeardPhrase();
        Log.v(TAG, "heard " + heardPhrase.getText());
    }

    @Override
    public void onError(Throwable error) {
        Log.e(TAG, "An error occurred", error);
    }
});

Callbacks and UI thread

Callbacks may be executed on any thread. Sometimes you want to dispatch them on the UI thread (e.g. to update the UI).

For that purpose, wrap the callback with Qi.onUiThread(…):

Listen listen = new Listen(this);
PhraseSet phraseSet = new PhraseSet(this, "hi", "hello");
Future<ListenResult> listenFuture = listen.run(phraseSet);
listenFuture.andThen(Qi.onUiThread(new QiCallback<ListenResult>() {
    @Override
    public void onResult(ListenResult result) {
        Log.v(TAG, "in thread " + Thread.currentThread().getName());
    }
}));

Samples

Check the Tutorials.

Supported Languages

At this time, the supported languages for voice interaction with the robot are the following:

  • English
  • Japanese
  • French
  • Chinese