What AI sees and how to use it.

I’ve recently become really interested in motion AI, and in how computers scan the gaps between what we feed them and what we want, and draw in the finishing lines themselves.

A good example of what I mean is the countless pieces of restored footage that use some form of AI to meticulously upscale, deblur and smooth out ancient film reels into 4K masterpieces.

Denis Shiryaev applied a neural network to this early 20th-century footage of Japan. The AI worked through the images frame by frame and, through a process of machine learning, developed them into something that appears to have been filmed yesterday.
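Shiryaev’s real stack chains together several research models (upscaling, colourisation, frame interpolation), and I don’t have his code; but as a minimal sketch of the frame-by-frame idea, here’s some Python using OpenCV’s dnn_superres module with a pretrained ESPCN super-resolution model. It assumes opencv-contrib-python is installed and the ESPCN_x4.pb weights have been downloaded; the file names are placeholders of mine:

```python
# A rough sketch (not Shiryaev's actual pipeline) of running a pretrained
# super-resolution model over a film frame by frame.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("ESPCN_x4.pb")        # pretrained 4x ESPCN weights (downloaded separately)
sr.setModel("espcn", 4)

cap = cv2.VideoCapture("old_film.mp4")   # placeholder input
fps = cap.get(cv2.CAP_PROP_FPS)
writer = None

while True:
    ok, frame = cap.read()
    if not ok:
        break                       # end of the reel
    up = sr.upsample(frame)         # 4x upscale of this single frame
    if writer is None:
        h, w = up.shape[:2]
        writer = cv2.VideoWriter("restored.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    writer.write(up)

cap.release()
if writer is not None:
    writer.release()
```

The real restorations also interpolate new in-between frames to smooth the motion and colourise the result, but the principle is the same: the model sees each frame, guesses the missing detail, and draws it in.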

Through his Poland-based team at neural.love, Denis has invested in using this kind of machine learning to find ways of recycling footage and images into new kinds of content. Some of it super freaky.

How far down does the tree grow? And how hot does the eye of heaven burn?

A forgotten aesthetic of the machine learning era we are living in is Google’s DeepDream, a program that ran an image-recognition neural network in reverse: instead of labelling what it saw in a picture, it amplified whatever patterns the network thought it recognised there.

Feed it any footage and, rather than reproducing it faithfully, it spits out these hallucinogenic nightmares.
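Under the hood it’s surprisingly little code. Here’s a toy sketch of the idea in PyTorch, not Google’s original (which used Caffe and an Inception network): take a pretrained image classifier, feed it a picture, and do gradient ascent on the pixels so that whatever one layer faintly detects gets exaggerated. The layer choice and file names are my own assumptions:

```python
# A minimal DeepDream-style sketch: gradient ascent nudges the input image
# toward whatever patterns one layer of a pretrained network responds to.
import torch
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Truncate a pretrained VGG16 at a mid-level layer (an arbitrary choice).
net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:20]
net = net.to(device).eval()

img = Image.open("frame.jpg").convert("RGB")   # placeholder input image
x = transforms.ToTensor()(img).unsqueeze(0).to(device)
x.requires_grad_(True)

for _ in range(30):                            # 30 steps of gradient ascent
    net.zero_grad()
    loss = net(x).norm()                       # how strongly does this layer fire?
    loss.backward()
    with torch.no_grad():
        x += 0.01 * x.grad / (x.grad.abs().mean() + 1e-8)  # normalised step up
        x.grad.zero_()
        x.clamp_(0.0, 1.0)                     # keep valid pixel values

transforms.ToPILImage()(x.squeeze(0).detach().cpu()).save("dream.jpg")
```

Because the network was trained to recognise animals and objects, the ascent keeps sharpening half-seen eyes, dog faces and swirls until the whole frame is made of them.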

Try and make heads or tails of that. It’s an image that burns and scars, but in all honesty is quite beautiful and especially stark in its own right.
