Alexa Custom Skill with AWS Lambda function in Java

In this article, we’re going to focus on the Alexa Skills Kit (ASK) and write our first custom skill, “HelloWorld”.

The Amazon Echo device is not just a device that magically translates our voice into commands that it can understand and execute. Amazon has created an entire ecosystem that comprises the following:

A range of hardware devices like Amazon Echo, Echo Dot, etc.

The engine behind these devices, the Alexa Voice Service (AVS), which is made available to manufacturers who would like to build voice-enabled capabilities into their own devices.

A software development kit, the Alexa Skills Kit (ASK), that developers can use to build custom skills.

AWS Lambda, which is the easiest way to host and publish your custom Alexa skill.

Build a Custom Alexa Skill for HelloWorld

We are going to build a Custom Alexa Skill for HelloWorld. The skill will be kept simple and it will provide information on a HelloWorld API. So we can interact with the Amazon Echo device by asking it sample queries as follows:

Alexa, ask A2Cart to say hello
Alexa, ask A2Cart to say hello world
Alexa, ask A2Cart to say hi

The above are just sample voice commands that we can give to the Amazon Echo. There can be multiple variations of them, and we shall look at that in detail as we build out the Alexa skill.

Note: The Skill will use dummy data and we will not actually connect to any live API to retrieve the answer to our queries.


Invocation Name

Identify the invocation name that will be used to trigger your skill. (The Amazon Echo device itself is woken up by a wake word: Alexa, Amazon, or Echo.) A typical interaction goes like:

Alexa, ask {invocation name} to {utterance}

In our case, we are going to go with A2Cart as the invocation name.

The Alexa Voice Service is designed in a way that gives you considerable flexibility in the commands and the words/phrases around them.

For example, we could interact with the same service in multiple ways, as shown below:

Alexa, ask A2Cart to say hello
Alexa, ask A2Cart to say hello world
Alexa, ask A2Cart to say hi

For more information on how you can design the Voice Interaction, check out the Voice Design Handbook.

Voice Interface Design, Intents schema and Utterances

When you design a custom Alexa skill, you need to spend some time thinking about how the user is going to interact with your service via voice. This involves defining the following:

Intents Schema

The schema of user intents in JSON format.

If you are familiar with Android programming, then you will understand an intent immediately. An intent represents a request that the user can make of your Alexa skill, and it results in the invocation of your service.

Your Skill can support one or more Intents which are usually defined by a JSON file that is aptly titled IntentSchema.json.

The file contents are shown below:

{
  "intents": [
    { "intent": "HelloWorldIntent" }
  ]
}

An Alexa skill can have one or more intents. Each intent is defined here by its name, and what we see above is the simplest possible way to define an intent, since we are only going to invoke the command and are not passing any variables or values while giving it.

The intent schema is very flexible and can also take parameters, which are defined in slots.
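As a hedged illustration only (this intent is not part of our HelloWorld skill, and GreetingIntent and the Name slot are hypothetical names), a schema with a slot might look like:

```json
{
  "intents": [
    {
      "intent": "GreetingIntent",
      "slots": [
        { "name": "Name", "type": "AMAZON.US_FIRST_NAME" }
      ]
    }
  ]
}
```

AMAZON.US_FIRST_NAME is one of Amazon's built-in slot types; the slot value the user speaks is passed to your service along with the intent.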

Utterances

These are what people say to interact with your skill. Type or paste in all the ways that people can invoke the intents.

So far we have finalized our invocation name (A2Cart) and our intents. We now need to specify the various voice commands that would be spoken as part of the intent. You can map an intent to more than one utterance to make the voice command flexible.

Here is the utterances file for our HelloWorldIntent:

HelloWorldIntent say hello
HelloWorldIntent say hello world
HelloWorldIntent say hi

Download the Alexa Custom Skill sample code (GitHub)

It should be clear to you how we have mapped a single intent to multiple utterances. You can now speak any of the statements and the Alexa Voice Service will try and map it to the right Intent.
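To make the mapping concrete, here is a minimal, self-contained Java sketch of the routing the skill performs. It does not use the Alexa Skills Kit SDK (a real skill would implement a Speechlet and register it with a Lambda request handler); the class name and response text here are illustrative only:

```java
import java.util.Map;

public class HelloWorldSkill {

    // Utterance-to-intent mapping, mirroring our utterances file:
    // multiple spoken phrases all resolve to the same intent.
    private static final Map<String, String> UTTERANCES = Map.of(
            "say hello", "HelloWorldIntent",
            "say hello world", "HelloWorldIntent",
            "say hi", "HelloWorldIntent");

    // Resolve a spoken phrase to an intent name ("" if unrecognized).
    static String resolveIntent(String utterance) {
        return UTTERANCES.getOrDefault(utterance.toLowerCase(), "");
    }

    // Dispatch an intent to its spoken response, as the Lambda handler would.
    static String onIntent(String intentName) {
        if ("HelloWorldIntent".equals(intentName)) {
            return "Hello World!";
        }
        return "Sorry, I didn't understand that.";
    }

    public static void main(String[] args) {
        // "Alexa, ask A2Cart to say hi" -> utterance "say hi"
        String intent = resolveIntent("say hi");
        System.out.println(onIntent(intent));  // prints "Hello World!"
    }
}
```

In the real service, the Alexa Voice Service performs the utterance-to-intent resolution in the cloud and your Lambda function only receives the intent name, so your handler code looks much like onIntent above.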

That’s it. Thank you for reading this post.

I want you to do something for me right now: Leave a comment !!!

Like this post? Don’t forget to share it!

Enjoy!!!
