# Botium Connector for Amazon Alexa Skills API

This is a Botium connector for testing your Amazon Alexa Skills with the Skills Management API.

__Did you read the Botium in a Nutshell articles? Be warned, without prior knowledge of Botium you won't be able to properly use this library!__

It can be used as any other Botium connector with all Botium Stack components:
* Botium CLI
* Botium Bindings
* Botium Box

## Requirements

* __Node.js and NPM__
* an __Alexa Skill__, and a user account with development rights
* a __project directory__ on your workstation to hold test cases and Botium configuration

## Install Botium and Alexa Skills API Connector

When using __Botium CLI__:

```
> npm install -g botium-cli
> npm install -g botium-connector-alexa-smapi
> botium-cli init
> botium-cli run
```

When using __Botium Bindings__:
```
> npm install -g botium-bindings
> npm install -g botium-connector-alexa-smapi
> botium-bindings init mocha
> npm install && npm run mocha
```

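`botium-bindings init mocha` wires Botium into the project's package.json. As a rough sketch of what to expect afterwards (the exact generated contents may differ by version, and the script and convo directory shown here are assumptions):

```
{
  "scripts": {
    "mocha": "mocha spec"
  },
  "botium": {
    "convodirs": [
      "spec/convo"
    ]
  }
}
```
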
When using __Botium Box__:

_Already integrated into Botium Box, no setup required_

## Connecting Alexa Skill to Botium

This connector includes a CLI wizard to initialize the _botium.json_ file holding your connection credentials.
_This wizard is part of Botium CLI as well._

```
> npx botium-connector-alexa-smapi-cli init
```

This wizard will guide you through the Botium Connector setup. Please follow the instructions; it involves copying and pasting from a web browser into this terminal window.

Open the file _botium.json_ in your working directory and add other settings if required:

```
{
  "botium": {
    "Capabilities": {
      "PROJECTNAME": "...",
      "CONTAINERMODE": "alexa-smapi",
      "ALEXA_SMAPI_API": "invocation",
      "ALEXA_SMAPI_SKILLID": "..."
    }
  }
}
```

Botium setup is ready, you can begin to write your BotiumScript files.
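
To illustrate, a minimal BotiumScript convo file (*.convo.txt) could look like this - the invocation phrase and the answer text are assumptions for this sketch:

```
TC01 - Welcome

#me
open my demo skill

#bot
Welcome to the demo skill!
```
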

## Importing the Interaction Model

This connector provides a CLI interface for importing the Interaction Model of your skill and converting it to BotiumScript:

* Intents and Utterances are converted to BotiumScript utterances files (see the sketch below)
* Slots are filled with meaningful samples if possible
  * You can hand over the samples to use with the _--slotsamples_ switch
  * For default slot types, samples are loaded automatically from the official documentation
  * For custom slot types, the samples from the interaction model are used
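
For illustration, an imported utterances file is plain text with the utterance name on the first line, followed by the sample phrases - the name and phrases here are hypothetical:

```
UTT_PLAY_QUIZ
start the quiz
begin the quiz
let's play
```
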
You can either run the CLI with botium-cli (it is integrated there), or directly from this connector (see samples/cli directory for some examples):

```
> npx botium-connector-alexa-smapi-cli import --interactionmodel entityresolutionquizdemo.json
```

_Please note that a botium-core installation is required_

For getting help on the available CLI options and switches, run:

```
> npx botium-connector-alexa-smapi-cli import --help
```


## Supported Capabilities

Set the capability __CONTAINERMODE__ to __alexa-smapi__ to activate this connector.

__ALEXA_SMAPI_API__
Either "simulation" or "invocation" to use the respective Skill Management API:

* __Skill Simulation API__ handles plain text input (including intent resolution)
* __Skill Invocation API__ handles structured input (intents and slots, no intent resolution done) and is therefore harder to use than the Simulation API

See the samples directory for configuration and conversation samples.

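For example, to test with the Skill Simulation API, the relevant capabilities in _botium.json_ are set like this (the Skill ID is a placeholder):

```
{
  "botium": {
    "Capabilities": {
      "CONTAINERMODE": "alexa-smapi",
      "ALEXA_SMAPI_API": "simulation",
      "ALEXA_SMAPI_SKILLID": "..."
    }
  }
}
```
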
__ALEXA_SMAPI_SKILLID__
The Alexa Skill ID.

Additional capabilities:

* The locale used for the simulation / invocation - see the Alexa documentation for the list of valid locales
* The long-lived refresh token. Typically, the refresh token is created with the initialization wizard (see above)
* The Skill Management API URL
* The AWS endpoint the Skill is linked to (only required for the Skill Invocation API)
* When using the Invocation API, tell Botium to use a special intent and a special slot to hand over the input text (intent resolution is done by the Skill itself)
* When using the Invocation API, tell Botium to use a special template for the invocation request (JSON formatted) - see the sketch below
* Add Audio and Display capabilities to the invocation request sent to the Skill Management API when set to true
* Generate a new userId for each different convo.txt file (Invocation API only). By default the userId is botium-core-test-user; when a new one is generated, it is botium-core-test-user-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx with a randomly generated UUID
* Track the changes to the audio player, such as the token and the playerActivity, if your skill contains audio player responses, and allow you to use intents such as `AudioPlayer.PlaybackNearlyFinished` and other `AudioPlayer` intents and get the state back in the response (Invocation API only)
* Prepend a phrase to all user utterances (Simulation API only)
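
The invocation request template follows the standard Alexa Skills Kit request JSON format. As a rough, illustrative sketch (the field values are placeholders, and the connector's own template placeholder mechanism is not shown):

```
{
  "version": "1.0",
  "session": {
    "new": true,
    "sessionId": "SessionId.00000000-0000-0000-0000-000000000000",
    "application": {
      "applicationId": "amzn1.ask.skill.00000000-0000-0000-0000-000000000000"
    },
    "user": {
      "userId": "botium-core-test-user"
    }
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "EdwRequestId.00000000-0000-0000-0000-000000000000",
    "locale": "en-US",
    "intent": {
      "name": "MyIntent",
      "slots": {}
    }
  }
}
```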