Teacher Handbook – Elementary Schools

Unit 1

Teacher Resources 1.1

Robotics vs. AI

In short, robotics and AI are different concepts. There are some overlaps where robots are powered by AI technologies. See the Venn diagram below that depicts the relationship between the two.

Students may confuse the two, thinking all robots are intelligent and/or that AI requires robots.

[Another Venn diagram for explanation of both]

Robotics only: Robotic arms in the video, Vex Robots, Lego Mindstorm, Drones
Both robotics and AI: Self-driving cars, Cozmos, Boston Dynamics, Roomba 980
AI only: Search engines, Content recommendation, Image classifiers

So, the question becomes, what makes a robot “smart”?

In short, “smart” robots have the ability to make decisions based on what they sense, so they are not just acting on a pre-programmed sequence.

For example, you can easily program a robot to move forward a certain distance and make a ninety-degree turn. But that is all it does: the robot repeatedly performs this script and makes no adjustments to its behavior based on what it senses.

A “smart” robot, on the other hand, is able to make decisions. If you program an AI robot like Cozmo to chase an object it recognizes, such as a cube, Cozmo will be able to locate the cube and move toward it regardless of where you put the cube. On a larger scale, consider self-driving cars: there is certainly no pre-programmed path or script dictating how the vehicle acts. Hence, they are “smart.”
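For teachers comfortable with a little code, the contrast can be sketched in plain Python. This is illustrative only, not the Cozmo SDK; the function names, distances, and action strings are all made up:

```python
# A scripted robot vs. a "smart" robot, as toy Python functions.

def scripted_robot(steps):
    """Repeats a fixed move-and-turn script, ignoring the world entirely."""
    path = []
    for _ in range(steps):
        path.append("forward 30 cm")
        path.append("turn 90 degrees")
    return path

def smart_robot(sensed_cube_direction):
    """Decides its next move from what its camera senses right now."""
    if sensed_cube_direction == "left":
        return "turn left, then drive to cube"
    elif sensed_cube_direction == "right":
        return "turn right, then drive to cube"
    elif sensed_cube_direction == "ahead":
        return "drive straight to cube"
    else:
        return "search: spin slowly until a cube is seen"
```

No matter how many times the scripted robot runs, its path never changes; the smart robot's next action depends on where the cube actually is.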

Teacher Resources 1.2

While there are many ways for students to explore facial recognition, including how Cozmo understands that he is looking at a face, the following three videos are useful. Below them is a list, albeit not exhaustive, of common technologies using facial recognition with which elementary students may be familiar.

Other common applications of facial recognition:

  1. Driver’s license photo lookups
  2. Police and FBI identifying people of interest
  3. Advanced ATMs using facial recognition to access banking

Teacher Resources 1.3

In explore mode, students can manually operate or “drive” Cozmo. However, his safety protocols are no longer in effect, and he can be driven off the table, causing damage. It is suggested either to A) place bumpers around the desk or B) operate Cozmo on the floor.

The following are activities a teacher may use with “driving” Cozmo.

Recognizing Objects

Students may drive Cozmo around. As they do, they will see what he sees through the “Explore” function in the Cozmo app. Students will notice he recognizes certain objects, such as faces, cubes, his charger, and “pets.” Students can make notes about what Cozmo can and cannot recognize.

Picking up Cubes

Students will see that Cozmo can pick up cubes even in explore mode. He is also able to roll them. When Cozmo recognizes a cube, a button appears on the screen; students can tap it to have Cozmo pick the cube up.

To add a layer of challenge, however, ask students to pick up the cube without the AI technology assisting. In other words, students have to manually operate Cozmo’s treads and arms to pick up the cube. Students can be prompted to note how challenging this is. This is useful when teaching object manipulation and how AI has certain limitations. Teachers can also contrast this with what makes humans special: our depth perception and the opposable digits on our hands.

Obstacle Course

Students can design obstacle courses and try to navigate Cozmo through the course. Students can be given the challenge of doing so either A) looking only at Cozmo, B) looking only through the app, or C) a combination of the two.

The lesson to draw from this activity is that it is challenging to drive Cozmo when looking only at him and not through the app, especially because the directions are reversed when he is driving toward the user.

Teacher Resources 1.4

Here are some images that have been tested for Cozmo to recognize as human faces (though students will have to enter the name of the person into “Meet Cozmo”). Additionally, these are people that students might readily recognize. Teachers may print them out or provide them in some digital form.

Unit 2

Teacher Resources 2.1

Introduction

Faces are essential to our individual identities. Facial recognition is how most of us visually identify others. Facial recognition in AI has grown beyond standard visual recognition in its ability not only to recognize objects but also to identify the person and the real-time expressions of the face. Because of these unique abilities, ReadyAI has decided to make it a separate criterion in the WAICY competitions.

Applications

Smartphones

In recent years, facial recognition has been adopted widely in the smartphone market. Apple’s iPhone X release brought “Face Unlock” to every user. Its Face ID technology is among the most secure and reliable on the market at the moment, as it can recognize a face even when a person’s appearance changes due to factors such as facial hair, makeup, scarves, hats, glasses, contact lenses, and sunglasses. Even with these variables, the facial recognition works, including in total darkness.

Security System

Facial recognition can be used to track people through surveillance cameras. It can pinpoint certain people’s locations and track their activities, which is useful in criminal investigation. In fact, the US Federal Bureau of Investigation has set up a machine learning system to identify criminal suspects from images of their faces. The FBI currently has a database that includes half of US residents’ faces, which could enable it to identify a person from security camera footage. Home security systems use similar technologies to detect the faces of people who enter homes.

Facial Recognition and Expressions in Cozmo

How does it work?

Computers use 2D and/or 3D analysis to recognize crucial points on the human face. The points, with derived distances, lengths, widths, and angles, form patterns that computers can associate with an identity. Cozmo sees these patterns and associates them with the input names.

How Does Cozmo Describe A Face?

Generally, a face has four components: two eyes, one nose, and one mouth. While these points are not visible in the Cozmo app, users running Calypso on a PC can see that:

  • “Each eye is represented by three points, forming a triangle.
  • The nose is represented by two points, forming a line.
  • The mouth is represented by four points. These can form a quadrilateral (as in “surprised”), a triangle, a curved line, or a straight line.” (*https://www.cs.cmu.edu/~dst/Calypso/Curriculum/08/)

As a result, “a facial expression is represented by 12 points, or 24 numbers, since each point has both an x and a y coordinate. There is a complex mathematical function that maps these numbers into expressions like ‘happy’ or ‘sad.’” (*https://www.cs.cmu.edu/~dst/Calypso/Curriculum/08/) Currently, only the most powerful facial recognition software can recognize numerous emotions.
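For teachers who want to make this concrete, here is a toy Python sketch of how four mouth points might map to an expression label. This is a deliberately simplified, hypothetical rule, not Calypso’s actual mathematical function, and the coordinates and thresholds are invented:

```python
# Toy expression classifier: four (x, y) mouth points -> expression label.
# In image coordinates, a larger y value means lower on the face.

def classify_mouth(points):
    """points: left corner, top, right corner, bottom of the mouth."""
    left, top, right, bottom = points
    openness = bottom[1] - top[1]          # how wide open the mouth is
    center_y = (top[1] + bottom[1]) / 2    # vertical center of the mouth
    corner_y = (left[1] + right[1]) / 2    # average height of the corners
    if openness > 20:
        return "surprised"                 # wide-open quadrilateral
    if corner_y < center_y:
        return "happy"                     # corners curve upward
    if corner_y > center_y:
        return "sad"                       # corners curve downward
    return "neutral"                       # roughly a straight line
```

Even this crude rule shows the idea from the quote above: a handful of coordinates, run through a function, becomes a label like “happy” or “sad.”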

*http://www.ibtimes.com/fbi-now-has-largest-biometric-database-world-will-it-lead-more-surveillance-2345062.

For more information, see

https://www.youtube.com/watch?v=0O0otPBtbXs

https://azure.microsoft.com/en-us/services/cognitive-services/emotion/

https://www.sciencedirect.com/science/article/pii/S1877050917305264

https://blog.algorithmia.com/introduction-to-emotion-recognition/

https://www.youtube.com/watch?v=kERPE_rt9rk

Unit 3

Teacher Resources 3.1

How Does Facial Recognition Work?

Facial recognition of human faces works on a series of datapoints, usually extracted from features common to humans: eyes, eyebrows, nose, and mouth. Some faces, particularly those with darker skin tones, can be harder for cameras to detect in poor lighting. This is not due to any implicit bias within the facial recognition camera or software itself; rather, it has to do with lighting. Teachers, then, are encouraged to have students stand against clear, distinct backgrounds before trying to scan their faces. This may alleviate hurt feelings among students whose faces the hardware has a challenge recognizing.

Newer technologies obviate this problem by relying on different sorts of cameras, such as the infrared camera in Apple’s iPhone X, which is able to pick up faces even in the dark. Moreover, such advanced cameras also scan depth, so they cannot be fooled by a photograph of a person’s face. To learn more about Apple’s iPhone X, click here.

That being said, how does AI distinguish one person from the next? Using those same datapoints above, measurements are taken. The distance between people’s eyes varies from person to person, as do the angles forming the triangle between the eyes and nose. Mouths also vary in size, width, and shape.
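A short Python sketch shows how such measurements can be computed from landmark coordinates. The functions and the coordinates in the usage below are illustrative, not any vendor’s actual algorithm:

```python
# Turning facial landmarks into measurements that can help
# distinguish one face from another.
import math

def eye_distance(left_eye, right_eye):
    """Straight-line distance between the two eye points."""
    return math.dist(left_eye, right_eye)

def nose_angle(left_eye, right_eye, nose):
    """Angle at the nose vertex of the eye-eye-nose triangle, in degrees."""
    a = math.dist(nose, left_eye)
    b = math.dist(nose, right_eye)
    c = math.dist(left_eye, right_eye)
    # Law of cosines: c^2 = a^2 + b^2 - 2*a*b*cos(angle at the nose)
    return math.degrees(math.acos((a * a + b * b - c * c) / (2 * a * b)))
```

For example, with eyes at (0, 0) and (6, 0) and a nose at (3, 3), the eye distance is 6 and the angle at the nose works out to 90 degrees. Two different faces will generally produce different sets of numbers, which is what lets software tell them apart.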

*image used from: https://www.eff.org/pages/face-recognition

The picture above traces the progress from what we see to what the computer sees. For a lengthier discussion, including false positives and false negatives as well as the societal implications of facial recognition, click here.

Unit 4

Teacher Resources 4.1

Alan Turing (1912 – 1954) was an important computer scientist prior to the real age of computers. Known more recently thanks to the movie The Imitation Game, in which Turing uses a computer to crack Nazi codes during World War II, his relevance to AI continues today.

Turing proposed what we now call the Turing Test, which seeks to determine whether a computer can pass as a human in conversation. Should it be able to, Turing argued, the computer could truly be called intelligent. Of course, few units can carry on much of a conversation even in 2019. Take Amazon Alexa, one of the more powerful AI units available commercially. While Alexa can answer single queries, such as, “What’s the weather like today?”, she cannot respond to a follow-up question like, “Do rainy days make you happy?” (The answer, as of publication of this manual, was, “Sorry. I don’t know that.”)

Turing’s test has continued in other forms, though, such as the CAPTCHA, which determines whether a human or a bot is navigating a website. Variations of the Turing Test can be found here.

For a video on Turing, click here. Teachers may want to share this with students as it is published by Cambridge University and is of high quality.

A more visually engaging video can be found in the Crash Course series. Click here.

Teacher Resources 4.2

What actually constitutes AI? There is no set definition, of course. The Elements of AI course, published by the University of Helsinki and freely available online, offers some of the following:

  1. “Cool things that computers can’t do”
  2. “Autonomous and adaptive systems”
  3. “Machines imitating intelligent human behavior”

A good activity with advanced students may include breaking down these definitions. For instance, don’t we need computers to do those “cool” things mentioned in number 1? What does “autonomous” really mean in number 2? And most humans do not engage in many of the skills AI can perform, such as data mining; is that really human behavior?

The point is, there is no set definition of AI at present. However, the following flowchart has been proposed as a way of determining “true” AI:

*A back-of-the-envelope explainer. Image hand-drawn by Karen Hao of MIT Technology Review

As for how to teach this, the following chart may provide some theoretical background on the curriculum you are now reading:

*https://ai.cs.cmu.edu/about

Teacher Resources 4.3

The goal of this lesson is for students to begin linking AI skills together. As humans, facial recognition in our minds is not independent of other actions. We wave at a person we know. We smile. We may run up to the person. Similarly, AI requires skills to be linked for the AI unit to be approachable for most people. One might even argue that it isn’t really AI at all if it does not manifest multiple skills at once, whatever they may be.

Thus, here are some ideas to prompt students’ connections between AI facial recognition and speech generation:

  1. The “Wal-Mart Greeter” – The AI unit sees a person and says, “Welcome!”
  2. The “Soccer Mom” – The AI unit welcomes the student home and says a snack is ready for him or her.
  3. The Intruder Alarm – The AI unit tells someone it sees that he or she is intruding.
  4. The Bank Teller – Before a person can access his or her bank account, the AI unit needs to scan the person’s face and say something like, “Welcome to Bank Name.”
  5. The AI Classroom Monitor – The AI unit welcomes students who arrive to class, documenting who is there and who is not for the teacher.

Students should be encouraged to think of their own examples in order to spur creativity and connect AI to their everyday lives.

Unit 5

Teacher Resources 5.1

Since the AI unit does not pitch its voice, there is no true “singing” taking place. Thus, the AI unit can be programmed either to speak the lyrics of a song or to use its one-octave range to “sing” a song.

Students may be more familiar with the idea of a toy piano, which can only play one octave. The AI unit is similarly limited, though the length of each note can be programmed. Encouraging students to play not just with the notes themselves but also with note lengths, as well as sharp notes, can increase the range of the AI unit’s singing.

Teacher Resources 5.2

Teacher Resources 5.3

The following are songs that can all be done on a single octave machine, such as the AI unit:

– “Do-Re-Mi” (The Sound of Music) (C D E C E C E …).

– “Raindrops Keep Fallin’ on My Head” (E E F E D C E …).

– “Santa Claus is Coming to Town” (D E G G G …).

– “Oh! Susanna” (C D E …).

– “Three Blind Mice” (E D C …).

– “Twinkle, Twinkle, Little Star” (C C G G A A G).

– “If You’re Happy and You Know It” (C C F F F F F F E F G).

More help can be found here.

You may encourage students to add longer pauses and longer notes to see the effects on songs.
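One way to let students experiment on paper first is to represent a song as (note, beats) pairs, as in this Python sketch. The note names and frequencies for the octave at middle C are standard, but the representation itself is our own illustration, not the Cozmo app’s format:

```python
# A single-octave song as a list of (note, beats) pairs.

# Frequencies (Hz) for the octave starting at middle C.
NOTE_HZ = {"C": 261.63, "D": 293.66, "E": 329.63, "F": 349.23,
           "G": 392.00, "A": 440.00, "B": 493.88}

# "Twinkle, Twinkle" opening phrase; the final note is held twice as long.
twinkle = [("C", 1), ("C", 1), ("G", 1), ("G", 1),
           ("A", 1), ("A", 1), ("G", 2)]

def song_length(song, seconds_per_beat=0.5):
    """Total playing time; lengthening notes lengthens the song."""
    return sum(beats for _, beats in song) * seconds_per_beat
```

Students can change the beats column, insert rests, or swap notes and immediately see (and hear, once programmed) how the song changes.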

Unit 6

Teacher Resources 6.1

In the video, students can see the dexterity possible for some robots as one rotates a block in its hand. Moreover, the robot is manipulating the cube in its hand to spell out words.

Students may be asked whether this is AI. The answer depends on the coding. If the robot follows a set track, meaning it moves from O to A because it has been preprogrammed to move the cube in its hand in that series, then the answer is likely no, despite its amazing dexterity.

However, if there is no set path, and the robot hand is making decisions as it determines the letters, then yes. The robot is not necessarily navigating, but it is choosing its own path of manipulating the cube through what are called “states” or the “state space,” the set of possible situations it may face (e.g., which letter is facing out).

Object manipulation is very difficult for robots. Most of the time, in order to manipulate various objects in various ways, robots need to be specially modified for the target and programmed in a certain way. Teachers may also emphasize the machine learning taking place as the robot quickly determines the fewest turns necessary to spell the given word.

Teacher Resources 6.2

For students to be able to pick up the cubes manually, they will need to run the connected device’s “Explore” mode. Be warned! Students can drive their AI unit right off a desk in this mode.

Students then have to use the connected device, the tablet or phone, to move the unit to the right approach angle to the cube. They also need to pick the cube up using the lift, which can be daunting if they do not approach at precisely the right angle.

“Explore” mode also allows them to tap the screen when the AI unit sees a cube, and the AI will automatically take over.

Finally, 6.2 demonstrates how to manually operate the AI units in CodeLab Sandbox and Construction Mode, and how to program the AI unit to pick up a cube using Code Lab and Construction Lab. Specific icons, seen below, allow the programming of picking up the cube once it is seen.

Students may also be encouraged to try other forms of manipulation, such as stacking cubes, rolling cubes, and knocking stacked cubes down. Cubes can also be used to manipulate the AI unit itself, such as flipping it onto its back or righting it from upside down back onto its treads. While these skills all must be coded into the unit, the unit demonstrates AI in that it navigates to the pieces on the right vectors and reacts automatically when it sees the objects within its view. See below for some coding ideas:

  1. Moving to a cube it sees
  2. Moving to a cube that has been tapped
  3. Moving to one of two cubes that has been tapped
  4. Moving to a cube once it changes to a certain color
  5. Picking up the cube or manipulating it in other ways.
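The ideas above can be sketched as simple “when condition, then action” rules in Python. This is illustrative only, not actual Code Lab blocks or SDK calls; the parameter names and action strings are invented:

```python
# Rule-style decision function for the cube coding ideas above.

def choose_action(sees_cube, cube_tapped, cube_color, target_color="blue"):
    """Pick the unit's next action from what it currently senses."""
    if sees_cube and cube_tapped:
        return "move to the tapped cube"            # ideas 2 and 3
    if sees_cube and cube_color == target_color:
        return "move to the cube and pick it up"    # ideas 4 and 5
    if sees_cube:
        return "move to the cube"                   # idea 1
    return "wait"                                   # nothing sensed yet
```

Note that the order of the checks matters here only because the conditions overlap; each rule still fires only when its condition is true.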

Unit 8

Teacher Resources 8.1

While students can attempt to mimic the game “Keep Away,” coding for a more traditional form of the game, often called “Monkey in the Middle,” may be found below. In the game, students must tap their cube before Cozmo gets to it. Students may test how close Cozmo can get without touching the cube.

This coding is quite rudimentary, and teachers should encourage students to play with the coding to enhance AI interaction. For instance, can students program

  • a reaction when Cozmo successfully gets to the cube?
  • Cozmo to tap the cube when he gets to it?
  • a loop of play so that the game continues until Cozmo wins?
  • varied reactions when he doesn’t get to the cube in time?
  • varying speeds of his approach to the cube so as to “fool” students?
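As a planning aid, the game loop students are aiming for might look like the following Python sketch. The distance, speeds, and win condition are all invented for illustration; nothing here calls the real robot:

```python
# Toy "Keep Away" loop: the robot wins if it reaches the cube
# before the player can tap it.

def keep_away_round(robot_speed, player_reaction_time):
    """One round; the cube is a pretend 100 units away."""
    time_to_cube = 100 / robot_speed
    if time_to_cube < player_reaction_time:
        return "robot wins"
    return "player taps in time"

def play_until_robot_wins(speeds, player_reaction_time):
    """Loop the game, varying the robot's speed each round to 'fool' players."""
    rounds = 0
    for speed in speeds:
        rounds += 1
        if keep_away_round(speed, player_reaction_time) == "robot wins":
            break                     # the loop ends once the robot wins
    return rounds
```

Varying the speeds list captures the last bullet above: unpredictable approach speeds make the game harder for the human player.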

Teacher Resources 8.2

Quick Tap

In this game, the AI unit is programmed to “quickly tap” the cube when the color changes to a user-defined color. To program the more advanced quick tap game, multiple rules are necessary, including the AI-human interaction and counting. Advanced students can be encouraged to try more advanced programming than this challenge provides, but most students may have enough difficulty with the cube color changes and lift movement.

Unit 9

Teacher Resources 9.1

In a sequential setting, events in a program happen in order. One action leads to the next, and the completion of later events relies on the occurrence of earlier events. This is widely used in automation, especially where actions are highly scripted and few variables are involved. Sandbox mode largely relies on this sort of programming as well.

In a rule-based setting, however, the program is a set of rules. In our case, Cozmo follows all the rules given to him in Constructor Mode. Actions happen when and only when their conditions are true. So, there is no difference if we switch the order of rules that begin with a clear condition, such as starting when the flag is tapped. This is more suitable for smart robots, which have to deal with complex situations containing multiple variables that could affect the robot’s behavior at the same time.
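The difference can be shown side by side in a short Python sketch. This is plain Python for illustration, not Constructor Mode blocks, and the action names are invented:

```python
# Sequential vs. rule-based programming, side by side.

def sequential_program():
    """Events run in a fixed order; later steps depend on earlier ones."""
    return ["drive forward", "turn left", "drive forward", "say hello"]

def rule_based_step(world):
    """Rules fire when and only when their condition is true.
    `world` is a dictionary of what the robot currently senses."""
    if world.get("bumped_obstacle"):
        return "turn left and go around"
    if world.get("sees_person"):
        return "move toward person"
    return "keep looking"
```

The sequential program always produces the same four actions; the rule-based step produces a different action depending on the state of the world each time it is called.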

Teacher Resources 9.2

Now that students have tried to be “programmed” in a sequential way, you can let them experience the difference described in Teacher Resources 9.1 by providing them a set of rules and “running” the program several times while altering the obstacles.

Place the desk between the student who’s playing Cozmo and student A. Ask the students to write down the sequential code for navigating to student A. Then move the desk closer, farther, or to either side. Most likely the original sequential code will fail, because it was based on where the obstacle was originally placed.

Now try the rule-based code:

  1. When see [Student A], move toward [Student A]
  2. When bump an obstacle, turn left and move around it.
  3. When bump [Student A], say [student input]

Once the “AI” student understands the rules, you can move the obstacles around and it shouldn’t be a problem for him/her.

Teacher Resources 9.3

What is necessary here is programming using the cubes’ identifiers (1, 2, and 3) or associating colors with the cubes in the code.

Here is a very basic code students can begin with:

One more necessary tip for students: the AI unit must “know” where the cubes are when the program starts. Thus, if students put the cubes directly in front of one another, the AI unit will not be able to recognize the cube behind.

Unit 10

Teacher Resources 10.1

In this video, the robot does a few things: it knows its surroundings, recognizes dog feces, and picks them up. In the context of our course, the robot performs object recognition, landmark-based navigation, and object manipulation. Students should be familiar with all of these by this lesson.

For more on how this robot works and the story behind the developers of it, check out this video: https://www.youtube.com/watch?v=-FBHA6_7CL4

Unit 12

Teacher Resources 12.1

The following simple code demonstrates an easily missed concept: the “repeat” function.

If students want their AI unit to “work out,” how many times must it repeat the above coding? And what, exactly, must be repeated? Only the moving lift should repeat.

A more advanced program might also include emotions afterward, or have the AI unit progress to the next cube as if picking up additional weights.
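The repeat idea can be sketched in Python (the action names are invented for illustration; the point is that only the lift movement sits inside the loop):

```python
# "Work out" routine: pick up the cube once, repeat the lift
# movement, then set the cube down once.

def workout(reps):
    actions = ["pick up cube"]
    for _ in range(reps):                  # only this part repeats
        actions += ["raise lift", "lower lift"]
    actions.append("set cube down")
    return actions
```

If students put the pick-up step inside the loop by mistake, the unit would try to grab a cube it is already holding on every rep, which makes the placement of the repeat block a useful discussion point.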