We often forget our brains are machines, probably the greatest of their kind. They can learn, store and output information, control a vast network of nerves, sense the world and think on their own. Mind you, this has taken millions of years of evolution, but we got there eventually… well, most of us did, anyway.

So after just 70 years of electronic computer development, technology and bots have become so advanced that they can now think on their own. Don’t be alarmed, I’m not talking about some I, Robot kind of stuff here where computers and robots take over the world.

(Image: my reaction when thinking about robots taking over.)

I’m talking really cool machine thinking abilities, like bots learning how to read, draw and understand human hieroglyphics (or writing, to you and me).

Google (everybody’s favourite search engine/second brain) has invested in Machine Learning. “What in the heck is that?” you’re probably thinking… long story short, Machine Learning is a method of data analysis that allows bots to automate answers and automatically learn and improve from experience, rather than following rules someone typed in by hand. Much like that one time you took a tequila shot and learnt never to drink that type of alcohol again.
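To make “learning from experience” a little more concrete, here’s a toy sketch in Python. The bot below gets no rules at all; it just remembers labelled examples and guesses by finding the closest one it has seen before. Everything here (the names, the features, the labels) is made up for illustration, and it’s nothing like how Google’s systems are actually built.

```python
# A toy "learns from experience" bot: a 1-nearest-neighbour classifier.
# It stores (features, label) examples and guesses new inputs by finding
# the closest remembered example. More experience = better guesses.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

class TinyLearner:
    def __init__(self):
        self.examples = []  # (features, label) pairs seen so far

    def learn(self, features, label):
        """Add one experience to memory."""
        self.examples.append((features, label))

    def guess(self, features):
        """Predict the label of the closest remembered example."""
        best = min(self.examples, key=lambda ex: distance(ex[0], features))
        return best[1]

bot = TinyLearner()
# Pretend experiences: (width, height) of doodles, labelled by subject.
bot.learn((1.0, 1.0), "donut")   # round things
bot.learn((3.0, 0.5), "snake")   # long flat things
print(bot.guess((1.1, 0.9)))     # -> donut
```

The point isn’t the maths; it’s that nobody ever told the bot what a donut looks like. It worked that out from examples, which is the core idea behind the experiments below.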

And so, after partnering with developers and specialists, Google has released its own take on Machine Learning. There’s a wide array of Machine Learning bots and experiments to explore, but this time I’ve chosen to focus on the two bots that learn from human drawing.

Quick, Draw! is an extremely fun program that involves you thinking on your feet to draw an assigned subject in less than 20 seconds. You may be asked to draw anything from a donut to a dog or a dragon. The machine will try to guess what you’re drawing based on your brush strokes. You can review the bot’s thinking at the end of your drawing to see the method behind its guesses. There’s also an insanely awesome database of over 50 million human drawings collected from everyone who has played. The aim of Quick, Draw! is to improve the bot’s learning capabilities by getting more humans to test out their drawing skills. So go on, get that mouse or finger doodling.

Sketch RNN works in a similar way, except in technical terms the bot is called a Neural Net (basically, software patterned to operate a bit like a human brain). The sketch bot will assign you an image to draw. Once you begin drawing, Sketch RNN kicks into gear to suggest how you can finish your image based on other people’s work. Take, for example, a spider. If I draw a circle, the bot will suggest I add four legs on either side of the circle, and maybe throw in some fangs, based on previous drawings. It’s a really great way of helping the machine learn and recognise human patterns, and in return you get to learn how to draw better!
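Here’s a very rough Python sketch of the “finish my drawing” idea: remember whole stroke sequences from past drawings, then complete a new one by finding the remembered drawing whose beginning best matches what you’ve drawn so far. The real Sketch RNN uses a recurrent neural network, not this prefix matching, and the named strokes here are invented purely for illustration.

```python
# Toy stroke completion: finish a partial drawing by matching its opening
# strokes against remembered full drawings, then suggest the rest.
# (The real Sketch RNN predicts strokes with a recurrent neural network;
# this stand-in just shows the "predict what comes next" idea.)

def prefix_score(partial, full):
    """Count how many leading strokes the partial drawing shares with a full one."""
    score = 0
    for a, b in zip(partial, full):
        if a != b:
            break
        score += 1
    return score

def suggest_finish(partial, memory):
    """Return the remaining strokes of the best-matching remembered drawing."""
    best = max(memory, key=lambda full: prefix_score(partial, full))
    return best[len(partial):]

# Each remembered drawing is a sequence of named strokes (hugely simplified).
memory = [
    ["circle", "leg", "leg", "leg", "leg", "fang", "fang"],  # spider
    ["circle", "ear", "ear", "whiskers"],                    # cat
]
print(suggest_finish(["circle", "leg"], memory))
# -> ['leg', 'leg', 'leg', 'fang', 'fang']
```

Draw a circle and a leg, and the bot’s best match is the spider, so it suggests more legs and fangs, just like the example above.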

(Image: Sketch RNN’s image predictor.)

You can help grow this technology and advance Machine Learning just by participating in these super fun and easy activities. In return, your brain will be exercised and challenged, and you’ll pick up some new drawing techniques.

So maybe human brains and computer brains aren’t that different after all. We rely on emotional and muscle memory to recall how to draw things, and the machine uses stored data and analytical comparison to draw with us. This technology could be used in schools to teach children how to draw and handwrite. And the best news is that the more we share and draw, the more our technological capabilities grow. So, happy sketching!

Take 2 minutes out of your day to experiment and play with the bots below!

Quick, Draw!      |      Sketch RNN