
Machine Learning Experiments

October 23, 2017

The very first lesson we learn about a computer is that it is a dumb device that does nothing on its own: it just responds to our instructions. Unlike humans, who learn from past experience, computers need to be told what to do. If you want a computer to filter out unwanted spam emails, in the conventional data-processing model you need to create a program with specific rules for separating spam emails from genuine ones. The shortcoming of this static model is that it cannot adapt to changing data: as spammers change their tactics, the hand-written rules go stale.
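To make the contrast concrete, here is a minimal sketch of the static, rule-based approach. The keyword list and threshold are made up for illustration; the point is that every rule is hand-written, so the filter cannot adapt on its own.

```python
# A minimal sketch of a static, rule-based spam filter. The keywords
# and threshold below are invented for illustration: if spammers stop
# using these words, a programmer must rewrite the rules by hand.
SPAM_KEYWORDS = {"lottery", "winner", "free", "viagra", "prince"}

def is_spam(email_text: str) -> bool:
    words = email_text.lower().split()
    hits = sum(1 for w in words if w in SPAM_KEYWORDS)
    return hits >= 2  # fixed threshold chosen by the programmer

print(is_spam("You are a lottery winner! Claim your free prize"))  # True
print(is_spam("Meeting moved to 3pm, see agenda attached"))        # False
```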

So the challenge is: can we get computers to learn from experience, or past data? In our spam-filter case, can the filter learn from our past email data and automatically adapt to the changing email landscape? The answer is yes: we can now make computers learn from experience or previous data. The technology that makes this possible is called machine learning (ML), and it has granted computing systems entirely new abilities. As we have dealt with the topic in the past (here and here), we will not delve further into the details here.
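The learning approach replaces hand-written rules with patterns inferred from labelled examples. Here is a toy sketch using the scikit-learn library (assumed installed); a real filter would train on thousands of messages, but even this tiny example can be retrained whenever new email data arrives.

```python
# A toy sketch of the learning approach: instead of hand-written rules,
# the classifier infers word patterns from labelled example emails.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free lottery prize now",        # spam
    "claim your free winner reward",       # spam
    "project meeting rescheduled to 3pm",  # genuine
    "please review the attached report",   # genuine
]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["free prize waiting for you"]))  # likely ['spam']
print(model.predict(["see you at the meeting"]))      # likely ['ham']
```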

Though we use a wide range of ML-based applications in our daily life (language translation services, tagging objects and people in pictures, book recommendations, etc.), many of us still find it difficult to internalise the concept of a dumb system learning from experience. Are you anxious to get some insight into what is going on under the hood of a machine learning application? If you are brimming with curiosity about the world of ML and want to see how the technology actually works, take a look at AI Experiments, Google's portal of ML applications. Here you will find a bunch of fun, educational applications that showcase the potential of machine learning. The aim of these applications is to help you explore the nuances of the technology in a simple, hands-on way. Each experiment comes with a short explainer from its developers, which helps you get started quickly. A few such applications are discussed here.

Teachable Machine is the latest AI experiment unveiled by Google. It lets you explore how machine learning works right inside the browser. You don't need to know anything about machine learning models or write a single line of code; all you need is a camera-enabled computing device with a browser. The camera (screenshot below) acts as the input device through which you send visual data to train the machine learning model.

[Screenshot: the camera input panel in Teachable Machine]

As the screenshots below show, the machine learning model is a classifier that sorts the input into one of three classes.
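How might such a classifier work? The browser demo is reported to compare each new camera frame against the examples you record, a nearest-neighbour style of classification over image features. The following is a minimal Python sketch of that idea, not the experiment's actual code; plain number vectors stand in for the features a real model would extract from camera frames.

```python
# A hedged sketch of a nearest-neighbour classifier: new inputs are
# labelled by majority vote among the k most similar stored examples.
import numpy as np

class KNNClassifier:
    def __init__(self):
        self.examples = []  # list of (feature_vector, class_label) pairs

    def add_example(self, features, label):
        self.examples.append((np.asarray(features, dtype=float), label))

    def predict(self, features, k=5):
        feats = np.asarray(features, dtype=float)
        # rank all stored examples by distance to the new input
        ranked = sorted(self.examples,
                        key=lambda ex: np.linalg.norm(ex[0] - feats))
        votes = [label for _, label in ranked[:k]]
        best = max(set(votes), key=votes.count)
        # confidence = share of the k nearest neighbours that agree
        return best, votes.count(best) / len(votes)
```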

The output component is where the machine responds to you. In the default setup, the output is one of three GIF pictures, each representing a class: the first GIF is connected to the green class, the second to the purple class and the third to the orange class.

By default, the application provides pictures of animals: a cat, a dog and a bunny. However, you can change those GIF images if you want. For example, instead of animals, you can choose GIF images of certain numbers (say 1, 2 and 4, as shown below). To change a GIF image, simply click on the image you want to change, search for the target image and replace the current one with it.

[Screenshot: the three output GIFs replaced with the digits 1, 2 and 4]

Now, let us assume that our aim is to create an application that shows a specific image in response to a visual signal. For instance, if you hold up one finger, the model should present (in our case) the picture of the digit one ('1'). For this, you need to train the model with appropriate signals: show one finger in front of the camera and hold down the green button for a few seconds.

[Screenshot: training the green class by holding one finger in front of the camera]

Once it is trained, whenever you show one finger, the model displays the GIF connected to the green class with 100% confidence (here, the GIF of the digit '1').
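In terms of the sketch above, holding the green button simply records a burst of labelled frames. The random vectors below are synthetic stand-ins for the features of real "one finger" frames.

```python
# Continuing the earlier sketch: training = storing labelled frames.
import numpy as np

rng = np.random.default_rng(0)
model = KNNClassifier()  # the classifier sketched above

# holding the green button ~ recording ~30 frames in quick succession;
# random vectors stand in for features of real one-finger frames
one_finger_frames = rng.normal(loc=0.0, scale=0.1, size=(30, 64))
for frame in one_finger_frames:
    model.add_example(frame, "green")

# a new, similar frame is matched against the stored examples
print(model.predict(rng.normal(0.0, 0.1, size=64)))
# -> ('green', 1.0): with only one class trained, confidence is 100%
```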

Now, let us feed the data for the purple class. Here, if you wish, you can raise two fingers and train the model by pressing the purple button for a few seconds.

[Screenshot: training the purple class with two fingers]

So, now our model understands two input signals: when you show one finger, it invokes the green class and shows the image '1'; when you show two fingers, it switches over to the purple class and displays the image connected to it (here, the picture '2'). You can show variations of these inputs (e.g., the fingers of a friend's hand) and see how the model responds with different confidence levels.
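Continuing the sketch, the confidence figure is just the share of nearest neighbours that agree. An ambiguous input sitting between the two trained gestures splits the neighbour votes, so the winning class comes back with lower confidence (the vectors remain synthetic stand-ins).

```python
# training the purple class the same way, with its own cluster of frames
two_finger_frames = rng.normal(loc=1.0, scale=0.1, size=(30, 64))
for frame in two_finger_frames:
    model.add_example(frame, "purple")

print(model.predict(rng.normal(1.0, 0.1, size=64)))  # ('purple', 1.0)

# a frame exactly between the two clusters usually splits the votes,
# so the winner is reported with lower confidence
ambiguous = np.full(64, 0.5)
print(model.predict(ambiguous))  # e.g. ('purple', 0.6)
```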

Follow the same procedure and train the orange class with some visual cue as well. As the GIF in our example is the image of the digit 4, we will train the model using four fingers (screenshot). Now, whenever we show four fingers, the model displays the image of the digit '4'.

[Screenshot: training the orange class with four fingers]

The output is not restricted to GIF images alone: instead of GIF, you can switch to the Speech or Sound options. If you switch to Speech, showing one finger makes it say 'Hello' and raising two fingers makes it utter the word 'Awesome'. These are the default words provided by the system, and you can replace them with anything you like ('One', 'Two', etc.). The experiment thus lets you manipulate different parameters of a machine learning system and dive into the topic in a hands-on fashion.
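In our sketch, the output stage is nothing more than a lookup from the winning class to whatever GIF, word or sound you assigned. The file names and phrases below are placeholders, not the experiment's actual assets.

```python
# mapping each class to its configured response; names are placeholders
outputs = {
    "green":  {"gif": "one.gif",  "speech": "Hello"},
    "purple": {"gif": "two.gif",  "speech": "Awesome"},
    "orange": {"gif": "four.gif", "speech": "Four"},
}

label, confidence = model.predict(rng.normal(1.0, 0.1, size=64))
mode = "speech"  # or "gif" / "sound", as chosen in the interface
print(outputs[label][mode])  # -> 'Awesome' for a two-finger input
```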

Another experiment hosted on the AI Experiments portal is 'Quick Draw', which guesses your drawings using machine learning. Here the user draws things and the application guesses what they are, using machine intelligence. In this regard, 'Quick Draw' acts like a picture dictionary. When you draw a picture, it follows each of your steps (the strokes you make, the direction in which you move your fingers, etc.) and comes up with a guess, based on information collected from examples drawn by other people.
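To give a feel for what the recogniser works with: a drawing reaches the model as an ordered list of strokes, each stroke a sequence of x and y coordinates (the public Quick Draw dataset uses a similar representation). A minimal sketch with made-up coordinates:

```python
# A made-up drawing in a stroke-list format similar to the public
# Quick Draw dataset: each stroke is a pair of x and y coordinate lists,
# recorded in the order they were drawn.
drawing = [
    [[10, 40, 70], [80, 20, 80]],  # stroke 1: a rough upside-down "V"
    [[25, 55], [50, 50]],          # stroke 2: a crossbar -> looks like "A"
]

for i, (xs, ys) in enumerate(drawing, start=1):
    points = list(zip(xs, ys))
    print(f"stroke {i}: {points}")
# The model guesses from stroke order and direction, compared against
# millions of examples drawn by other people.
```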

Thing Translator

Do you have objects in your home or workplace whose names you would like to know in a different language? Check out the AI experiment 'Thing Translator'. Once the application is loaded on your machine, simply hold the object in front of your camera; the application will immediately tell you what it is called in the language you choose.
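Under the hood, such an application plausibly chains two steps: recognise the object in the frame, then translate its name. The sketch below shows only that flow; label_image and translate_text are hypothetical helpers standing in for real vision and translation services, not the experiment's actual code.

```python
# A hedged sketch of the Thing Translator flow. Both helpers below are
# hypothetical stand-ins for real cloud vision/translation services.
def label_image(image_bytes: bytes) -> str:
    """Hypothetical: return the most likely object label, e.g. 'guitar'."""
    raise NotImplementedError("plug in a real image-recognition service")

def translate_text(text: str, target_language: str) -> str:
    """Hypothetical: return `text` translated into `target_language`."""
    raise NotImplementedError("plug in a real translation service")

def thing_translator(image_bytes: bytes, target_language: str) -> str:
    label = label_image(image_bytes)               # e.g. 'guitar'
    return translate_text(label, target_language)  # e.g. 'guitarra'
```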

We have pointed out only a few of the experiments hosted on AI Experiments. Apart from the ones discussed here, you will find several other interesting experiments on the portal.