Into My Eye

Platform: WebRTC, TensorFlow.js, Node.js

“Into My Eye” is a web-based interactive experience that creates a poetic space for users to look into the eyes of “machines” and see the world they see.

The project is realized mainly with WebRTC, TensorFlow.js, and Node.js running on the backend: WebRTC and Node.js organize and maintain the server for the program, while TensorFlow.js runs word2vec, a group of related models used to produce word embeddings. These embeddings allow the program to perform vector operations on a given set of input words.
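As a rough sketch of what those vector operations can look like, the snippet below loads a hypothetical word-vector table and computes the cosine similarity between two words with TensorFlow.js ops; the '@tensorflow/tfjs-node' binding, the embeddings.json file, and the helper names are assumptions for illustration, not the project's actual code.

```typescript
import * as tf from '@tensorflow/tfjs-node';
import * as fs from 'fs';

// Hypothetical embedding table: a JSON map from each word to its word2vec vector,
// e.g. { "eye": [0.12, -0.34, ...], ... }. The file name is a placeholder.
const embeddings: Record<string, number[]> = JSON.parse(
  fs.readFileSync('embeddings.json', 'utf8')
);

// Cosine similarity between two word vectors, computed with TensorFlow.js ops.
function cosineSimilarity(a: number[], b: number[]): number {
  return tf.tidy(() => {
    const ta = tf.tensor1d(a);
    const tb = tf.tensor1d(b);
    const dot = tf.dot(ta, tb);                 // scalar dot product
    const norms = tf.norm(ta).mul(tf.norm(tb)); // product of vector magnitudes
    return dot.div(norms).dataSync()[0];
  });
}

// Example use: how close are two words in the embedding space?
// cosineSimilarity(embeddings['eye'], embeddings['camera'])
```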

Multiple users can visit the webpage together and trigger the program by giving a simple one-word input. When the program receives the word, it first compares it with the category list of the Google Quick, Draw! dataset: if the word matches a category, the program returns a random drawing from that category; if not, it calculates the distances between the input and the Quick, Draw! categories and returns the word “closest” to the input based on its knowledge. Results based on the user’s own input are framed as “To me [input] is [result]”, and results based on other users’ inputs are framed as “Others think [input] is [result]”. The participation of multiple users at the same time thus turns the session into a collective poetry-writing experience.
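A minimal sketch of that input-handling flow is given below, assuming a preloaded embedding table and a small slice of the Quick, Draw! category list; the names, vectors, and helper functions here are illustrative stand-ins rather than the project's real implementation.

```typescript
// Illustrative slice of the Quick, Draw! category list (the real list has 345 entries).
const quickDrawCategories = ['eye', 'cat', 'house'];

// Tiny mock embedding table; in the project, vectors would come from word2vec.
const embeddings: Record<string, number[]> = {
  eye:    [0.9, 0.1, 0.0],
  camera: [0.8, 0.2, 0.1],
  cat:    [0.1, 0.9, 0.0],
  house:  [0.0, 0.2, 0.9],
};

// Plain-arithmetic cosine similarity so this sketch stands alone.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Resolve a one-word input to an exact category, or to the "closest" category.
function resolve(input: string): { word: string; exact: boolean } {
  if (quickDrawCategories.includes(input)) {
    return { word: input, exact: true }; // caller then returns a random drawing
  }
  let best = quickDrawCategories[0];
  let bestScore = -Infinity;
  for (const category of quickDrawCategories) {
    const score = cosine(embeddings[input], embeddings[category]);
    if (score > bestScore) {
      bestScore = score;
      best = category;
    }
  }
  return { word: best, exact: false };
}

// Each result becomes a line of the collective poem, e.g. for the input "camera":
//   own input:      `To me camera is ${resolve('camera').word}`       // "To me camera is eye"
//   others' input:  `Others think camera is ${resolve('camera').word}`
```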

For a long time, how an algorithm processes our input - or, more poetically, the way a machine thinks about what we think - has been a “black box”, and this project tries to visualize that invisible “thinking” process of machines. By framing the program’s results not as results but as a kind of answer to our questions, the project purposely blurs the boundary between human minds and algorithms and forms a poetic conversation between the two, as part of the fantasy of a cyberpunk virtual world.

[Image: Into My Eye.png]