M Eifler created a project called Prosthetic Memory after a serious brain injury left her with severely impaired memory. Her long-term memory is dramatically reduced and can hold only little information. Instead of being able to remember important moments of her life, she retains only simple information such as her phone number or how to ride a bike.
Prosthetic Memory has three components that work together to give her access to her memories through digital assistance.
A custom machine learning algorithm that acts as a bridge between the physical world she interacts with and the virtual memories that make up her prosthetic (additional) memory.
Handmade paper journals that include drawings, collages and writings are digitally registered into a database and work as triggers for memories.
Video documentation, similar to vlogging, that captures events, emotions, reactions and feelings that occur during her everyday life.
When a picture from the paper journals is shown to the AI’s camera, the network recognizes the visual trigger and gives her access to all the documented files from the day the paper artwork was created. Cameras placed all around her house monitor her actions and can offer access to similar memories depending on the task she is performing. Objects and tools around her house are also tracked by the AI, so she can revisit memories related to the object she is using.
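The trigger mechanism can be sketched roughly as a nearest-neighbor lookup: each registered journal page has an image embedding, and whatever the camera sees is matched against the database. This is my own hypothetical simplification in Python (the embeddings, page IDs and file names are invented; the actual project uses a custom machine learning model):

```python
import math

# Hypothetical database: an embedding for each registered journal page,
# mapped to the memory files documented on the day the page was created.
MEMORY_DB = {
    "page_2019_03_12": {
        "embedding": [0.9, 0.1, 0.3],
        "files": ["vlog_2019_03_12.mp4", "notes_2019_03_12.txt"],
    },
    "page_2019_05_02": {
        "embedding": [0.1, 0.8, 0.5],
        "files": ["vlog_2019_05_02.mp4"],
    },
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recall(camera_embedding, threshold=0.8):
    """Return the memory files of the best-matching journal page, if any."""
    best_id, best_score = None, 0.0
    for page_id, entry in MEMORY_DB.items():
        score = cosine(camera_embedding, entry["embedding"])
        if score > best_score:
            best_id, best_score = page_id, score
    if best_score >= threshold:
        return MEMORY_DB[best_id]["files"]
    return []  # no registered trigger recognized

# A camera frame close to the first page unlocks that day's documentation.
print(recall([0.85, 0.15, 0.35]))
```

The same lookup idea would cover the room cameras and tracked objects: each task or tool simply becomes another entry in the database.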
I believe that interaction design has a lot to offer people with disabilities by using technology in the most efficient way possible to assist people with special needs. The project “Prosthetic Memory” is very inspiring and fuels a lot of creativity in that area.
Kristina Tica draws attention to the form, limitations and advantages of a medium with regard to artistic expression. The interaction between the artist and the medium used to produce an artifact is a reciprocal process: the artist influences the medium and the medium influences the artist in a constant, uninterrupted fashion. For this art installation, Kristina Tica used visual coding and AI neural networks. She emphasized the interest that arises from trying to get behind the invisible mechanism by which the user interface of a medium shapes the interaction between the artist and the artwork. With this mindset, she experiments with visual programming in an attempt to discover its boundaries and limitations.
Her team created an AI neural network that collected and analyzed more than 40,000 pictures of traditional Christian Orthodox iconography and proceeded to generate religious depictions autonomously. Kristina believes that the chaotic world of numbers in a code and all the uncountable calculations done by the neural network take substance in the form of those pictures. In the same way a picture silently communicates many unspoken words, the neural network communicates with pictures all the invisible numbers that stand behind it. It is worth mentioning that during the presentation of the installation in physical space, both the generated artifacts and the code behind each of them are exposed to the audience.
In my opinion, and based on my cultural background, unifying religious artifacts with artificial intelligence is simultaneously fascinating and highly controversial.
As an individual I was caught off guard when I stumbled upon this project, because it made me realize that religion is one aspect of life that has not yet been subjected to any substantial form of digitalization. We live in a world where many aspects of our lives are heavily influenced by technology and digitalization, and humanity is actively pursuing the digitalization of even more areas of life. The matter of faith and religion poses a huge topic of analysis and a challenging task when it becomes subject to AI and technology.
As an interaction designer, it is challenging to think of the parameters and approaches that should be considered, or even allowed, when attempting to build an effective hypothetical digitalized interaction between humans and their faith.
The panel AI x Ecology deals with the question of how technology can influence our environment and ecology. I have often asked myself how sustainable technology, internet use and especially streaming are, and how we can use technology to live more sustainably. In her talk “Computational Sustainability: Computing for a Better World and a Sustainable Future”, Carla P. Gomes gives an insight into her work and explains what sustainable development means.
Animate Architecture • Build Installations • Collaborate With Performers & Machines • Code With Interfaces • Create Sonic Environments • Choreograph Light • Use Advanced Manufacturing • Reimagine Robots • Augment Bodies • Construct Virtual Realities – these are not just simple buzzwords; rather, they enable us to consider space, context, systems, objects and people as potential performers and to recognize the wide scope for creativity.
The Interactive Architecture Lab, based at the Bartlett School of Architecture, University College London, deals with exactly these topics and runs a Masters Programme in design for performance and interaction. Dr. Ruairi Glynn, the Director of this Lab, is trying to break boundaries and develop new kinds of practices within this Programme. At the core of the Masters is “the belief that the creation of spaces for performance and the creation of performance within them are symbiotic design activities”. He sees performance in interaction as a holistic design practice and presented a wide range of (student) projects at Ars Electronica 2020, which can also be seen as an impact on and inspiration for today’s and tomorrow’s development of interaction design.
The painting “The Kiss” is not only one of the most famous paintings in Austria, but also an iconic work of art history. Even before the painter finished his masterpiece, it was purchased by the Austrian state, and it has been in the Belvedere Museum in Vienna ever since. Every year thousands of visitors come to see the work of art.
Creating your World to Change our Perspective – The Moment Before. This was the theme of this year’s Ars Electronica for all u19 participants. In this talk the winners, who are listed below, were presented. It is very interesting to observe that many of the works deal with themes that show characteristic traits of the current Covid-19 situation, even though the projects were submitted before the pandemic.
To gain more insight into AI in combination with music, I chose the Bozar act, which was marked with a star in the Ars Electronica festival program. “Live coding expert and drummer Dago Sondervan and multi-instrumentalist Andrew Claes team up for an experimental exploration of artificial intelligence in music performance. Armed with an arsenal of specifically developed tools and applications, the duo will train a virtual agent towards musical autonomy and realtime interaction, becoming a trio along the way” (Palais des beaux-arts de Bruxelles – Musical creation and innovation with AI, 2020).
The mixture of human performance and AI agents was very interesting to me, because I had never seen a performance like that. The music of the concert sounded very abstract and technical to me, which made it feel very futuristic. Afterwards, Dago Sondervan, Andrew Claes and Dr. Frederik de Bläser gave an interview about their performance.

To create the sounds, the artists use a patching software in which they record material and program shifts of random notes – that is the moment where AI comes into play. It receives input in the form of images, shifting notes and messages, which triggers the AI to produce output. This is described as a combination of generative and adaptive systems and AI. Dr. Frederik de Bläser explains that the AI assists them and learns from them as a creative partner: the performer plays a piece of music, the intelligent assistant adds its own input, and this creates a feedback loop, a kind of dialogue between the artist and the AI.

It is a very special kind of music, which in my opinion cannot be called melodic, because it has many layers of sound and is very textured – but the aim of the artists is also to make the music sound like machines. Also, the more complex the AI becomes, the harder it is to control. At this point it becomes very debatable whether there is a limit for AI or not. There was also a short discussion between the interviewer and the artists about this, which I found interesting because I had never thought about the role of AI in the music sector. But it can be a black box: surprising but also uncontrollable, because according to Bozar, what the AI understands is sometimes completely wrong. It needs more and more information, yet is hard to train; you can give it more context, but that makes it more difficult to handle.
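The feedback loop described in the interview – the human plays, the agent learns from that input and answers – can be illustrated with a deliberately simple toy sketch of my own in Python (this is not the patching software or the tools the trio actually uses): the agent counts which note tends to follow which in the performer’s phrase and then generates a response from those learned transitions.

```python
import random
from collections import defaultdict

class ToyMusicalAgent:
    """A toy adaptive agent: it learns note transitions from the performer
    and generates a response phrase from what it has heard so far."""

    def __init__(self, seed=0):
        self.transitions = defaultdict(list)  # note -> notes heard after it
        self.rng = random.Random(seed)

    def listen(self, phrase):
        # Learning step: remember which note followed which.
        for a, b in zip(phrase, phrase[1:]):
            self.transitions[a].append(b)

    def respond(self, start, length=8):
        # Generation step: walk the learned transitions.
        note, phrase = start, [start]
        for _ in range(length - 1):
            options = self.transitions.get(note)
            if not options:
                break  # nothing learned after this note yet
            note = self.rng.choice(options)
            phrase.append(note)
        return phrase

agent = ToyMusicalAgent()
agent.listen(["C", "E", "G", "E", "C"])  # the human plays a phrase...
print(agent.respond("C"))                # ...the agent answers with its own
```

Each round of listening changes what the agent can play next, which is the dialogue the artists describe – and also hints at why more input makes the agent’s behavior harder to predict and control.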
In conclusion I can say that AI works well for live performances to demonstrate its functionality, even if the music does not sound melodic to me. I can imagine using it more commercially to make it more accessible to everyone, to create custom-made music and offer an individual music experience, as already mentioned in the interview. At the moment, though, it can be hard to handle or expensive. Maybe this will change due to the interest of some companies in the future, because AI is still perceived as a scary and unfamiliar tool.
Sources: Palais des beaux-arts de Bruxelles. (2020, September 13). Bot Bop: Musical creation and innovation with AI | Concert & Talk | BOZAR x Ars Electronica [Video]. YouTube. https://www.youtube.com/watch?v=8kOKv8DQ__U