multi-sided box/bucket

In the last post we looked at creating a four sided box that could be used as a bucket or planter box. The idea is to work up to a wooden cask or barrel by gradually increasing the number of sides, or staves, so that the cask stays together and holds water without using glue. We built two versions. The first has straight sides and creates a 4x4x4 cube. The second is a little more complex: it has the same 4 inch top but tapers the bottom to 3 inches.

4_4_wood_planter 4_3_wood_planter

What would it take to make a six sided box of the same design? If we go to our online angle calculator we see that we need to cut the angle at 30 degrees rather than 45 degrees. The big question becomes how wide each side, or stave, should be.

six_sided

We want the entire piece to fit into a 4 inch square, so we need to use geometry to calculate the width of each board. Two things we know from the design: three of these sides need to fit into four inches, which gives us the total width of the three sides combined, and the total height of the three sides combined needs to fit into two inches. From this we can calculate the width of each board.

IMG_1306

There are a few things to note. First, the middle piece is parallel to the centerline, so we can create a right triangle from the inside edge of the cut. We know that the thickness of the wood is 3/4 of an inch, so we know one side of the triangle. We also know the angle of the resulting cut is 60 degrees because the table saw blade is tilted 30 degrees. Using an online geometry assistant we can calculate the length of the wood remaining after the cut. If we draw a straight line at 90 degrees to the centerline, we know the angle is 30 degrees and the height of the triangle is two inches. From this we know the angle and the adjacent side and want to calculate the hypotenuse.

adjacent-opposite-hypotenuse

We can calculate the hypotenuse with one of the following equations:

SOH…
Sine: sin(θ) = Opposite / Hypotenuse
…CAH…
Cosine: cos(θ) = Adjacent / Hypotenuse
…TOA
Tangent: tan(θ) = Opposite / Adjacent

H = A / cos(angle). We know that A is 2 inches and the angle is 30 degrees. It is important to realize that Excel uses radians, so the outer width of the wood (the hypotenuse) = A / cos(angle / 180 * pi()) = 2 / cos(30/180*pi()) = 2.3094. This is perfect because we can use it to cut the width of our board. We cut the board 4 inches high with a 90 degree cut, then reset the table saw blade to 30 degrees, cut one edge, and set the blade guide to make the second cut 2.31 inches from the first, so the board measures 2.31 inches from outer edge to outer edge. The inner edge will be (2 - 3/4) / cos(30/180*pi()), or 1.443 inches.
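The same arithmetic can be scripted. Here is a quick sketch of the stave width calculation in JavaScript (Math.cos expects radians, just like Excel's cos()):

```javascript
// Stave width for the six-sided box: hypotenuse = adjacent / cos(angle).
var toRadians = function (deg) { return deg * Math.PI / 180; };

var halfWidth = 2;      // half of the 4 inch square
var thickness = 0.75;   // 3/4 inch stock
var tilt = 30;          // blade tilt in degrees for six sides

var outerWidth = halfWidth / Math.cos(toRadians(tilt));
var innerWidth = (halfWidth - thickness) / Math.cos(toRadians(tilt));

console.log(outerWidth.toFixed(4)); // 2.3094
console.log(innerWidth.toFixed(4)); // 1.4434
```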

six_sided_box

The next exercise is to build the tapered bucket where the top is four inches and the bottom is three inches.

six_sided_tapered_box

The taper angle that we calculated for the four sided box translates here as well. The top of each stave ends up being 2.31 inches wide. The width of the bottom can be calculated using the 1.5 inch centerline with 3/4 inch thick wood at a 30 degree angle, which works out to 1.732 inches. Using the online calculator, the blade tilt should be 29.7 degrees. This results in six pieces that should fit together.
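The bottom width calculation can be sketched the same way (JavaScript again, converting degrees to radians):

```javascript
// Bottom stave width for the tapered six-sided box:
// 1.5 inch centerline half-width at the bottom, same 30 degree joint angle.
var bottomWidth = 1.5 / Math.cos(30 * Math.PI / 180);
console.log(bottomWidth.toFixed(3)); // 1.732
```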

IMG_1308 IMG_1307

The resulting physical boxes look like

IMG_1309

Note that we use blue painters tape to hold the pieces together. We are close to being round, so a metal band might or might not hold the box together. Next up is an eight sided box. We might need to increase the size of the box because the width of each board is getting a little small to cut safely on a table saw. We will probably fit it into an 8 inch square because that basically just doubles all of the measurements.

real math – real wood

Something that piqued my interest during the holiday break was bourbon and whiskey. I was driving back from Austin one day and saw a sign for Bone Spirits Distillery in Smithville, Tx. I stopped and was intrigued by a local distiller using local corn to produce whiskey, gin, and bourbon. I talked to the owner and he explained how everything is local with the exception of the rye because you can’t get good sweet rye grass in Texas. The conversation spurred a thought: if local distillers are starting to produce white whiskey, there might be a need to build or buy an aging cask to blend my own bourbon.

There appear to be a bunch of casks available through Amazon or others (Barrels Online, The Barrel Mill, and Deep South Barrels [Pearland, Tx]) for $50-$150 depending upon the size (1 liter – 20 liter). The idea is that you purchase an oak cask that has been charred on the inside and you fill it with the blend that you want. The char of the oak adds flavor and color to the white whiskey to create a blend that you want. front

The idea of creating my own blend appealed to me so I ordered a 1 liter cask. The jury is out on how well it will work. The idea appealed to me so much that I wanted to figure out what it would take to create a cask or keg on my own. I started studying the woodworking details and skills needed to build an aging barrel or smaller cask. It turns out that this is a centuries-old process. The early woodworkers, called coopers, cut the wood with hand tools to create barrels. I wanted to see how difficult it would be to build a barrel using modern tools like a table saw, band saw, welder, and other tools.

My initial research shows that a barrel consists of a number of boards (called staves) that are shaped narrow at one end, fat in the middle, and narrow at the other end. A metal band is placed around each narrow end to pull the wood together and help make it water tight. The construction does not require glue or any adhesive, just wood, moisture, and metal bands to keep everything together. The staves have a well defined shape so that the fat middles fit together and allow the barrel, which is relatively large and heavy, to roll easily. The narrow ends pinch together under a metal band and are pushed out by a circular board that acts as each end of the barrel. The cuts seem relatively simple. download (1) download

I wanted to figure out how complex the cuts were, so I started with something a little simpler. Given that a barrel or cask is wide in the middle and symmetric, getting smaller at both ends, I should be able to build half of this and test fitting everything together before doing a full barrel. Surprisingly, this is the design of a wooden bucket or wooden planter. There are plenty of samples and references for wooden buckets.

4_side_bucket oak_bucket

I wanted to start with something simple and work my way up. The easiest was a four sided “planter box” that was 4 inches wide at the top and 4 inches wide at the bottom, creating a simple 4 inch by 4 inch by 4 inch box. I used cedar since it is inexpensive and the result could be used as a planter box. The cuts were simple. First, take a piece of wood wider than four inches and cut four pieces to four inches in length; this 90 degree cut creates the height of the bucket. The next two cuts are done at 45 degrees to create the sides. The first cut creates the reference edge. The second cut is made so that the longest edge is four inches long. The resulting board is four inches tall with flat cuts on each end and sides cut at 45 degrees, allowing each side to fit against the next to form a box. 45_degree_cut

The resulting box looks very simple but does require glue to keep it together. A band around the box will keep the bottom in but will not hold water at the top because it is difficult to get pressure on the joints with a square band across the top and bottom. 4_4_wood_planter

A more challenging cut is to do a tapered box with a 4 inch top and 3 inch bottom. The design is a little more elegant and visually appealing. Again, this design requires glue to fit together but does allow us to practice angle cuts and take the next step in creating more staves or sides to round the bucket more. 4_3_wood_planter

The math behind this construction is a little more complex, but not much. We first cut four 4 inch pieces of wood to get the 4 inch height. We then draw a line from the top corner to half an inch inside the bottom corner to get a three inch bottom. You can calculate the cuts required using geometry, or cheat and use an online cutting guide. We do have to use a little math to use the online guide, which asks for the number of sides and the angle. To get the angle we need to figure out what a 4 inch top and 3 inch bottom generate. We can do this because we know the adjacent height (4 inches) and opposite width (1/2 inch) of the triangle that we are trying to create. From this we can calculate the angle as tan(angle) = opposite / adjacent, which correlates to angle = arctan(0.5/4). In Excel this would be =atan(0.5/4), which yields 0.12435. Unfortunately this returns a value in radians, which means nothing on a table saw, so we multiply by 180 and divide by pi: =atan(0.5/4)*180/pi(). This yields 7.125 degrees. Note that changing the height of the bucket will change the angle. If we plug 4 sides and 7.125 degrees into the jansson.us N-sided box calculator we get a blade tilt of 44.6 degrees. This is difficult to measure, but setting the blade tilt a notch short of 45 degrees should work.
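The same calculation as a quick JavaScript sketch (Math.atan returns radians, so we convert to degrees exactly as the Excel formula does):

```javascript
// Taper angle for a 4 inch top, 3 inch bottom, 4 inch tall side:
// angle = arctan(opposite / adjacent) = arctan(0.5 / 4).
var taperRadians = Math.atan(0.5 / 4);
var taperDegrees = taperRadians * 180 / Math.PI;

console.log(taperRadians.toFixed(5)); // 0.12435
console.log(taperDegrees.toFixed(3)); // 7.125
```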

The two resulting boxes came out well using cedar. I am using blue painters tape to hold the sides together. The intent of this prototype is to get a basic shape and make sure when we go to more sided boxes we fit within the same space. Our next test will be a six sided box of the same dimensions.

IMG_1304

Amazon Echo – apps I like

Having played with the Amazon Echo for a few days, there are a few things that I like and don’t like.

  1. The natural language interpretation is relatively good. I remember years ago trying to use Dragon Naturally Speaking and realizing that I could type better than it could take dictation. I think that the Alexa app can understand what I am saying 90% of the time and act appropriately.
  2. I like the open platform concept where people can submit their own apps and get them released. This is a new technology and innovation and new apps are being added on a regular basis. I like the fact that you can add your own and not have to go through Amazon to get it approved for your use. You also don’t have to buy a developers license to do development. This was my biggest frustration with the Apple iPhone development platform. Initially if I wanted to develop something for me, I had to pay $100 annually and have Apple review it.
  3. Some of the apps that I like
    1. a home grown app that states days till an event. I hacked together an app based on the color example in the Alexa Skills Kit and used Lambda to store the code and the developer console to store the utterances. The app is currently very simple: you start it and it lists days till three dates. You can ask for each individual date by saying keywords like graduation and college and it will list days till classes start and days till graduation.
    2. the LiveStream connection. I listen to NPR every morning to get the news. I like the “daily update” that you can request to get bits of news from different sources. News read by a newscaster is much better than the computer voice reading it, so the blend of NPR, ESPN, and other news sources mixed in helps. It is a little frustrating that you can’t mix in your own sources and the app gives you a pre-defined list, but it works for me right now.
    3. the Jeopardy Skill. This is an interesting app in that it will ask you six questions on a daily basis. I like the brain teasers and this is almost like doing a small crossword puzzle every day. I could see setting something like this up to help kids with homework. It looks like the questions could come from DynamoDB or be hard coded into Lambda. The question and answer flow appears somewhat natural and I have not stumbled across syntax or phrasing issues with my answers. The natural language processing seems flexible and is forgiving enough to interpret your answers.
    4. The Kindle Book Reader. I tried this today to see how well it works. I can see it being something that I would use randomly. If I am doing something with my hands and need a distraction other than music it might work. The computer voice is a little difficult to listen to when compared to an Audible Book but I am a bit cheap and don’t want to pay the big bucks for audio books.
    5. Music. I love the variety of music that can be played. It is nice being able to request Pandora or an artist specifically. I have gotten addicted to Pandora and it is nice that it is integrated into the Echo. Being able to provide feedback with like and dislike also helps train the stations of interest. I have my Echo in the kitchen so it is nice to work on dinner or something special while listening to music.
  4. Some of the things that are a little frustrating are
    1. Having the device tethered to a power cord and wifi network. It would be much nicer if it were portable or an app on my iPhone. I randomly use Siri, but the app diversity on the Echo makes it more than a way to query answers from the Internet.
    2. Speaking of the internet, simple Siri queries work much better than Alexa queries. The integration with Google, Bing, Yahoo, or any search engine is not as good as Siri. Yes, you can ask things like how many pints are in a gallon and it will answer, and it typically does well at looking things up on Wikipedia, but the natural language interface with Siri is so much better. This is one area where I am looking forward to improvement.
    3. The process to launch an app is cumbersome at best. Some terms get overloaded. For example when I was first trying to load the Jeopardy app I kept getting the song Jeopardy loaded and playing. It took a while to figure out what to say and realize that there was a delay in enabling the app and having it work on the Echo.
    4. The user interface to enable a skill is a little clunky. My first thought was that a great add-on to the Echo would be a touch screen that can sit near it. The card interfaces that are displayed in your phone app could be displayed on a wifi-enabled touchscreen. For example, if I ask about the score of a sporting event it would be nice to get the box score and stats of the game displayed on a screen. This is currently done through a card interface to a phone/computer, but having it displayed on a Raspberry Pi or BeagleBone touch screen device would be a great add-on. I can see something like this being offered and integrated into a car (once the Echo/Alexa app can be made mobile).
    5. The name Alexa. It would be really nice if you could change the device wake word. I use the word Amazon way too much and don’t want to use that word. It would be nice if you could address it as “Hal” or “Computer” as is done in 2001 or Star Trek. I would even settle for “R2D2” or “C3PO” because, hey, they were just mobile Echos on steroids. That is where we are going, right?
  5. Things I look forward to
    1. Using the Echo as the base station playing on other bluetooth speakers and not using my iPad/iPhone to play. I would rather use the Echo as a player and the speakers around the house to get sound where I want it
    2. Mobility. Mobility. Mobility
    3. Apps that can integrate with my work calendar or calendar on my phone. My gmail calendar is nice but it is not where I store what I do on a daily basis.
    4. Integration with other devices like my set-top box so that I can set recordings and list things that have been recorded.

Overall I am happy with my experience. My family has not hated it yet and my relatives have enjoyed trying to stump the device with commands.

Amazon Echo date skill (app)

In this tutorial we will develop a simple Alexa skill using the development tools and Lambda services. We want to build a simple application that looks at the current date and says how many days remain until targets that we define. The configuration is static and self contained and does not rely on any other services.

First, we want to log into https://developer.amazon.com

Screen Shot 2016-01-04 at 10.35.12 PM

From here, we click on the Alexa link to get access to the development console.

Screen Shot 2016-01-04 at 10.36.37 PM

Click on the DEVELOPER CONSOLE button at the top right of the screen. This will take you to an authentication screen (assuming that you have signed up for the development tools for your account).

Screen Shot 2016-01-04 at 10.37.37 PM

This takes you to the developer console.

Screen Shot 2016-01-04 at 10.39.01 PM

Click on the Apps & Services button at the top left.

Screen Shot 2016-01-04 at 10.40.17 PM

Click on Alexa to get the Alexa development console.

Screen Shot 2016-01-04 at 10.41.18 PM

Click on the Alexa Skills Kit to see the skills that you have defined. It is important to note that these skills are available for the account, and only the account, that you logged in as. When you set up your Amazon Echo you associate an account with the Echo. This linkage allows you to test these apps (called skills) on your Echo before releasing them to Amazon for wide distribution.

When you click on the Skill Kit you see a list of skills that you have defined. From here you can create new skills and edit old ones. In this example we have four skills that we have developed: eagle, English premier league soccer, introSkill, and pat.

Screen Shot 2016-01-04 at 10.43.20 PM

If we click on the edit button to the right of eagle, we can look at how we have defined the eagle skill.

 

In this example, we name the skill eagle and it is invoked by saying “Alexa, load eagle”. We also link this service to a Lambda service identified by the Amazon Resource Name arn:aws:lambda:us-east-1:288744954497:function:KenEagle

Screen Shot 2016-01-04 at 10.46.24 PM

The second thing that we need to define is the intents, slots, and utterances for this skill. This is done by clicking on the Interaction Model in the left menu. Note that there are two things that this skill will report. The first is the number of days until Ken’s 18th birthday, which is the last day that he can earn his Boy Scout Eagle rank. The second is the number of days until he graduates from high school. Invoking the skill by itself announces the number of days until both events. If you ask for eagle, it reports only the number of days until his 18th birthday. If you ask for graduation, it reports only the number of days until his graduation ceremony.

Screen Shot 2016-01-04 at 10.48.43 PM

Note that we have two subroutine calls that are made: EagleIntent and GraduationIntent. These are two routines that are defined in the Lambda service.

The Test tab allows us to enter the text translation of an utterance and see how the system responds. Note that we get not only what will be said with the outputSpeech but also the card that is displayed in the app that runs on your phone or tablet.  In this example we used the utterance “eagle” to see what it responds with.

Screen Shot 2016-01-04 at 10.54.03 PM

In the Description tab we look at a simple and longer description of the skill that will be used when you publish this service. This information is not critical if you are self publishing and have no intent to share this skill with others.

Screen Shot 2016-01-04 at 10.56.52 PM

We also include a couple of icons for a small and large logo associated with this skill. We can also define more information that Amazon needs to figure out if this skill should be shared with others and how it might work before they certify the skill.

Screen Shot 2016-01-04 at 10.58.53 PM

It is important to note that the last screen is not needed if you have no intention of publishing your app and only want to run it locally on your personal Echo.

The second part of the code is the Lambda Service. The Lambda Service is managed from the aws console, https://console.aws.amazon.com.

Screen Shot 2016-01-04 at 11.01.10 PM

Click on the Lambda console.

Screen Shot 2016-01-04 at 11.02.42 PM

In this console, we see Lambda functions that we have created. We can click on the radio button to the left of the name and perform a variety of actions on the function. One is to look up the ARN which we needed for the Alexa Skill Kit.

Screen Shot 2016-01-04 at 11.04.22 PM

Clicking on the Show ARN gives us the unique identifier that we need.

Screen Shot 2016-01-04 at 11.04.37 PM

Clicking on the Test Function pull down allows us to select a standard test to run our code if we want to. This is the easiest way to test the initial invocation routine that is run if you launch your skill without any invocations or utterances initially. For example, if I say “Alexa, load eagle” it will run the Alexa Start Session routine which corresponds to the onLaunch() function.

Screen Shot 2016-01-04 at 11.06.02 PM

There are five things that you can do with this user interface. The first is editing your code for this skill. We will dive into the code in a bit, but the initial screen does show you the inline code that you create.

Screen Shot 2016-01-04 at 11.10.41 PM

The second option that we have on this screen is the Configuration options. Here we define that the system that we are defining is invoked with index.handler so we need to define a handler in our code. We are also associating a user role to this function allowing it to run as a lambda routine on the Amazon Lambda Services. We select the lambda_basic_execution role.

Screen Shot 2016-01-04 at 11.12.19 PM

The Event sources section defines how the function is called. We add an event source and select the Alexa Skills Kit to invoke this Lambda function.

Screen Shot 2016-01-04 at 11.14.44 PM

We don’t really use the API endpoints or Monitoring for this application. The free tier gives us 1,000,000 requests per month for our service. Based on what we are doing (on one device) we should never get that many invocations, and we are not calling this service from other services.

In the code section, the first thing that we do is define a handler

// Route the incoming request based on type (LaunchRequest, IntentRequest,
// etc.) The JSON body of the request is provided in the event parameter.
exports.handler = function (event, context) {
    try {
        console.log("event.session.application.applicationId=" + event.session.application.applicationId);

        /**
         * Uncomment this if statement and populate with your skill's application ID to
         * prevent someone else from configuring a skill that sends requests to this function.
         */
        /*
        if (event.session.application.applicationId !== "amzn1.echo-sdk-ams.app.[unique-value-here]") {
            context.fail("Invalid Application ID");
        }
        */

        if (event.session.new) {
            onSessionStarted({requestId: event.request.requestId}, event.session);
        }

        if (event.request.type === "LaunchRequest") {
            onLaunch(event.request,
                event.session,
                function callback(sessionAttributes, speechletResponse) {
                    context.succeed(buildResponse(sessionAttributes, speechletResponse));
                });
        } else if (event.request.type === "IntentRequest") {
            onIntent(event.request,
                event.session,
                function callback(sessionAttributes, speechletResponse) {
                    context.succeed(buildResponse(sessionAttributes, speechletResponse));
                });
        } else if (event.request.type === "SessionEndedRequest") {
            onSessionEnded(event.request, event.session);
            context.succeed();
        }
    } catch (e) {
        context.fail("Exception: " + e);
    }
};

All of this code is auto generated by the sample skill kit. The important thing to note in this section is that there are three request types: LaunchRequest, IntentRequest, and SessionEndedRequest. The LaunchRequest calls onLaunch, which we must define later in the code. The IntentRequest calls onIntent, which maps the utterances to the intents that we defined earlier. The last section, SessionEndedRequest, calls onSessionEnded. This cleans up any locks or variables that we have created and gets everything ready for the next session.

/**
 * Called when the session starts.
 */
function onSessionStarted(sessionStartedRequest, session) {
    console.log("onSessionStarted requestId=" + sessionStartedRequest.requestId +
        ", sessionId=" + session.sessionId);
}

The onSessionStarted basically only logs the fact that the skill was called. We don’t really do more than logging when things are created.

/**
 * Called when the user launches the skill without specifying what they want.
 */
function onLaunch(launchRequest, session, callback) {
    console.log("onLaunch requestId=" + launchRequest.requestId +
        ", sessionId=" + session.sessionId);

    // Dispatch to your skill's launch.
    getWelcomeResponse(callback);
}

onLaunch is called if the user launches the skill with no utterances. This then logs the session creation and calls the getWelcomeResponse function. We could include the getWelcomeResponse code in this function or ask for an utterance before doing anything.

/**
 * Called when the user specifies an intent for this skill.
 */
function onIntent(intentRequest, session, callback) {
    console.log("onIntent requestId=" + intentRequest.requestId +
        ", sessionId=" + session.sessionId);

    var intent = intentRequest.intent,
        intentName = intentRequest.intent.name;

    // Dispatch to your skill's intent handlers
    if ("EagleIntent" === intentName) {
        EagleSession(intent, session, callback);
    } else if ("GraduationIntent" === intentName) {
        GraduationSession(intent, session, callback);
    } else if ("AMAZON.HelpIntent" === intentName) {
        getWelcomeResponse(callback);
    } else {
        throw "Invalid intent";
    }
}

The onIntent function maps the utterances that we defined in the Alexa Skills Kit to subroutine calls. Remember that we created two intents and their utterances: “eagle” calls the EagleIntent and “graduation” calls the GraduationIntent. These are defined in the Interaction Model section and are implemented in this code. We also have a default AMAZON.HelpIntent if you ask for help. If you say anything else you get an error with an exception of “Invalid intent”.

/**
 * Called when the user ends the session.
 * Is not called when the skill returns shouldEndSession=true.
 */
function onSessionEnded(sessionEndedRequest, session) {
    console.log("onSessionEnded requestId=" + sessionEndedRequest.requestId +
        ", sessionId=" + session.sessionId);
    // Add cleanup logic here
}

The onSessionEnded function does not need to clean up anything, so it only logs that the skill exited for debugging purposes.

// --------------- Functions that control the skill's behavior ---------------

function getWelcomeResponse(callback) {
    // If we wanted to initialize the session to have some attributes we could add those here.
    var sessionAttributes = {};
    var cardTitle = "Welcome";
    var today = new Date();
    var months = new Array('January', 'February', 'March', 'April', 'May', 'June',
        'July', 'August', 'September', 'October', 'November', 'December');
    var month = months[today.getMonth()];
    var day = today.getDate();
    var target = new Date(2016, 2, 24, 0, 0);
    var postfix = "th";
    var diff = new Date(target - today);
    var days = Math.round(diff / 1000 / 60 / 60 / 24);
    if (day == 1) { postfix = "st"; }
    else if (day == 2) { postfix = "nd"; }
    else if (day == 3) { postfix = "rd"; }

    var graduation = new Date(2016, 4, 28, 0, 0);
    var diff2 = new Date(graduation - today);
    var speechOutput = "Ken only has " + days + " days left to finish his Eagle and " +
        Math.round(diff2 / 1000 / 60 / 60 / 24) + " days until graduation";
    // If the user either does not reply to the welcome message or says something that is not
    // understood, they will be prompted again with this text.
    var repromptText = "Today's date is " + month + " " + day + postfix;
    var shouldEndSession = false;

    callback(sessionAttributes,
        buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
}

The getWelcomeResponse function is called from the help intent or from onLaunch. This is the meat of the application. We define a few things like the session attributes that can be passed to other functions and the card information for the app that runs on the phone or tablet, complementing the Amazon Echo verbal interface.

The intent of this code is to print out the number of days between today and March 24th, and between today and May 28th. These two dates are selected because the 24th is Ken’s birthday and the 28th is his graduation day. We first get today’s date by calling the JavaScript Date() constructor. We want to convert this date from the default date format into something that can be spoken. We first define an array of months that translates the numeric months (0..11) into words (January…December). We also pull out the day so that we can say that today is January 4th. The postfix is “st”, “nd”, or “rd” for the 1st, 2nd, and 3rd, and “th” for all other numbers.
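One thing to watch: the postfix logic in the skill only special-cases days 1, 2, and 3, so a day like the 21st would come out as “21th”. A more general helper (a hypothetical improvement, not part of the original skill) might look like:

```javascript
// General ordinal suffix: handles 11th-13th as well as 21st, 22nd, 23rd, 31st.
function ordinalSuffix(day) {
    // 11, 12, and 13 all take "th" despite ending in 1, 2, 3.
    if (day % 100 >= 11 && day % 100 <= 13) { return "th"; }
    switch (day % 10) {
        case 1: return "st";
        case 2: return "nd";
        case 3: return "rd";
        default: return "th";
    }
}

console.log(4 + ordinalSuffix(4));   // 4th
console.log(22 + ordinalSuffix(22)); // 22nd
```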

 

We also define the target date as March 24th, 2016 with the Date(2016,2,24,0,0) function call. Note that we pass in the year 2016 but pass in 2 for the month. The months start with 0 for January so March is a 2 in this representation. We next pass in 24 for the day of the month. The zeroes represent hours and minutes. Since we only care about days, we zero out these values.

 

Once we know our target as well as today’s date, we can take the difference of the two and convert it to days. The difference is in milliseconds, so we divide by 1000 (to get seconds), by 60 (to get minutes), by 60 again (to get hours), and by 24 (to get days). Math.round(diff/1000/60/60/24) performs this conversion and returns the number of days between our target and today.
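As a standalone sketch of that conversion (assuming today is January 4th, 2016, the date on the screenshots):

```javascript
// Days between two dates: subtracting Date objects yields milliseconds.
var today = new Date(2016, 0, 4);    // months are zero-based, so 0 = January
var target = new Date(2016, 2, 24);  // 2 = March
var msPerDay = 1000 * 60 * 60 * 24;
var days = Math.round((target - today) / msPerDay);
console.log(days); // 80
```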

 

We also look at our second target, graduation, and calculate the diff2 for days till graduation from today.

 

The speechOutput definition is what will be said by the Amazon Echo. It is important that this be natural speech and not raw numbers. We could say "Today's date is " + today, but that would be confusing to the person listening to the Echo. By converting to natural language, 01-12-2016 translates into January 12th.

The repromptText is what is used if the routine is re-prompted or you ask Alexa to repeat what was said. It is typically shorthand for what was initially said. In our example, getWelcomeResponse by default says the number of days until Ken’s 18th birthday as well as the number of days until his graduation.

The shouldEndSession variable is either true or false if you want to exit the skill after this function is executed. In this example, we do not want to exit after the getWelcomeResponse so we set the variable to false.

 

The last thing that we do is call buildSpeechletResponse which creates the audio output from the speechOutput and the application output with the cardTitle. If the shouldEndSession is true, the skill exits. If it is false, we wait for an utterance to call an intent.

function EagleSession(intent, session, callback) {
    var sessionAttributes = {};
    var cardTitle = "Welcome";
    var today = new Date();
    var months = new Array('January', 'February', 'March', 'April', 'May', 'June',
        'July', 'August', 'September', 'October', 'November', 'December');
    var month = months[today.getMonth()];
    var day = today.getDate();
    var target = new Date(2016, 2, 24, 0, 0);
    var postfix = "th";
    var diff = new Date(target - today);
    var days = Math.round(diff / 1000 / 60 / 60 / 24);
    if (day == 1) { postfix = "st"; }
    else if (day == 2) { postfix = "nd"; }
    else if (day == 3) { postfix = "rd"; }

    var graduation = new Date(2016, 4, 28, 0, 0);
    var diff2 = new Date(graduation - today);
    var speechOutput = "Ken only has " + days + " days left to finish his Eagle";
    // If the user either does not reply to the welcome message or says something that is not
    // understood, they will be prompted again with this text.
    var repromptText = "Today's date is " + month + " " + day + postfix;
    var shouldEndSession = true;

    callback(sessionAttributes,
        buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
}

The EagleSession is the routine called from the EagleIntent. The key difference in this function is what is returned for the speechOutput. In this function we only report the days until his 18th birthday and not his graduation date. Note that we calculate the graduation delta but do not speak it. This happened because we basically copied the previous code and dropped it into this function, changing only the speechOutput.

function GraduationSession(intent, session, callback) {
    var sessionAttributes = {};
    var cardTitle = "Welcome";
    var today = new Date();
    var months = new Array('January', 'February', 'March', 'April', 'May', 'June',
                           'July', 'August', 'September', 'October', 'November', 'December');
    var month = months[today.getMonth()];
    var day = today.getDate();
    // Months are zero-based, so 2 is March: March 24th, 2016.
    var target = new Date(2016, 2, 24, 0, 0);
    // Subtracting two Dates gives the difference in milliseconds.
    var diff = target - today;
    var days = Math.round(diff / 1000 / 60 / 60 / 24);
    // Ordinal suffix for the spoken date; 11th-13th stay "th".
    var postfix = "th";
    if (day % 10 == 1 && day != 11) { postfix = "st"; }
    else if (day % 10 == 2 && day != 12) { postfix = "nd"; }
    else if (day % 10 == 3 && day != 13) { postfix = "rd"; }

    var graduation = new Date(2016, 4, 28, 0, 0);
    var diff2 = graduation - today;
    var speechOutput = "Ken only has " + Math.round(diff2 / 1000 / 60 / 60 / 24) + " days left till graduation";
    // If the user either does not reply to the welcome message or says something that is not
    // understood, they will be prompted again with this text.
    var repromptText = "Today's date is " + month + " " + day + postfix;
    var shouldEndSession = true;

    callback(sessionAttributes,
        buildSpeechletResponse(cardTitle, speechOutput, repromptText, shouldEndSession));
}

The GraduationSession is again the exact same code with the speechOutput changed. Note that we also set shouldEndSession to true so the skill exits if you ask about graduation. This was done here arbitrarily, just to exercise the feature.

function buildSpeechletResponse(title, output, repromptText, shouldEndSession) {
    return {
        outputSpeech: {
            type: "PlainText",
            text: output
        },
        card: {
            type: "Simple",
            title: "SessionSpeechlet - " + title,
            content: "SessionSpeechlet - " + output
        },
        reprompt: {
            outputSpeech: {
                type: "PlainText",
                text: repromptText
            }
        },
        shouldEndSession: shouldEndSession
    };
}

The buildSpeechletResponse function builds what Alexa will say upon return from an Intent call. The response is returned as a JSON structure using the output (our speechOutput) defined by the calling routine.
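To see the shape of the object this produces, the function can be exercised on its own with sample strings (the values here are illustrative, not from the skill):

```javascript
// Same builder as in the skill, repeated here so the example runs standalone.
function buildSpeechletResponse(title, output, repromptText, shouldEndSession) {
    return {
        outputSpeech: { type: "PlainText", text: output },
        card: {
            type: "Simple",
            title: "SessionSpeechlet - " + title,
            content: "SessionSpeechlet - " + output
        },
        reprompt: { outputSpeech: { type: "PlainText", text: repromptText } },
        shouldEndSession: shouldEndSession
    };
}

var r = buildSpeechletResponse("Welcome", "Hello", "Hello again", true);
console.log(r.outputSpeech.text);  // "Hello"
console.log(r.card.title);         // "SessionSpeechlet - Welcome"
console.log(r.shouldEndSession);   // true
```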
function buildResponse(sessionAttributes, speechletResponse) {
    return {
        version: "1.0",
        sessionAttributes: sessionAttributes,
        response: speechletResponse
    };
}

The buildResponse is what is finally returned to the Echo through the Alexa Skills Kit. This is a JSON structure containing all of the data needed to create the card and speak the response through the speakers.
Overall this example is very simple. The only complex part of the code is defining two target dates, getting today's date, and calculating the delta between each target and today. If you want to play with this code, I suggest adding a third Intent. Pick a target date like July 4th and see how many days lie between today and July 4th. A simple hint: the target date should be new Date(2016, 6, 4, 0, 0) because the months are zero-based. You might need to create a couple of utterances because some people say "the 4th of July" and others say "July 4th". If you want to get really complex, pick something like Valentine's Day or Easter as your target and figure out what day that is by making a search call to a web page. If you type "when is easter 2016" into a Google search, Google returns Sunday, March 27 embedded in a web page. The challenge in this request is mapping a web page answer into a Date() function call for the date returned. Once you figure out how to translate this natural language query into a computer query, you can make a generic query of how many days till…. and a generic Alexa Skill.
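As a starting point for that exercise, the date math for a third intent might look like the following sketch (the function name is a suggestion; you would still wire it to an Intent and wrap the result in buildSpeechletResponse as above):

```javascript
// Hypothetical third target: Independence Day (month 6 = July, zero-based).
function daysUntilFourth(today) {
    var target = new Date(2016, 6, 4, 0, 0); // July 4th, 2016
    return Math.round((target - today) / 1000 / 60 / 60 / 24);
}

// From June 24th, 2016 there are 10 days left.
console.log(daysUntilFourth(new Date(2016, 5, 24, 0, 0))); // 10
```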