Designing with Extended Intelligences

27.02-01.03

DOTTOD and AI Modmatrix

In this seminar we had the opportunity to play with some of the tools that Peter and Chris created as “dottod”, a platform for generating images.

Investigation into concepts of decay and hood slang

Together with Anna and Jorge, we used the tools made available to us to investigate how artificial intelligence perceives exaggerations of concepts and how, without special cues, it visually recreates the degradation of spaces and things. We tried not to write overly specific prompts, so as not to influence the image generation. Still, we noticed that if technological components are present in the image, the AI tends to degrade them through oxidation of all the elements; if, on the other hand, the starting image is a white or black background, the dominant color remains in the generated image, the degradation takes on tones reminiscent of that main color, and it tends to be more organic, such as mold, in some cases with the presence of garbage.
For the second part we used Modmatrix, a tool that can generate text, audio, images and data models from its inputs. We experimented again with generating images from phrases and quotes from trap songs, mostly in Spanish. We played with an existing picture, observing all the variations in which another image was generated based on each quote. Many of these quotes relate to concepts from the street, and because of this the site’s policies did not allow us to include many of them, as they contained “inappropriate” content.

Our presentation!

Reflection

I must say that this seminar helped me clarify my ideas on how to use AI to develop projects and ideas, and I think that is great! I now see clearly that AI is not a stand-alone tool but needs human considerations to work well and produce the results you want, and at the same time it needs training from both the human and the machine to create a collaboration. The issue I found most interesting, and that still raises questions for me, is the ethical role of artificial intelligence: based on its dataset, it shows only one side of the output generation, what I would like to call the “safe side”, and leaves out everything else, which is also part of our reality. We already live in a buffered reality where the internet blocks inappropriate content and insidious topics, and now AI is built around prompts that are politically correct, a conception of reality that is perhaps a bit hypocritical considering what is going on in the world. I’m just a little worried that society doesn’t notice the gap, because people no longer want to look up from the screen, where the net is a safe and happy place.


Last update: June 24, 2024