
PComp | The ❤️ Machine

December 13, 2017

"Having Fans Means Needing Fans"

- Zoe Fraade-Blanar & Aaron M. Glazer

Concept

 

The piece is an exploration of how far a relationship between a fan and a celebrity can be taken. It is the last part of a three-part study on fandom culture (see the first part here).

 

In the era of social media, fans now have access to celebrities' personal lives. The celebrities, in turn, leave trails for their fans in the form of interviews, social posts, and symbolism in their work. Fans need their celebrity, and celebrities need their fans.

 

But what if fans had more than just digital access to their celebrity? What if the perversion of this relationship were taken to the extreme, where the fan physically owned the celebrity?

 

The ❤️ Machine is a manifestation of this idea of celebrity ownership. It is a Tamagotchi, a "virtual pet" that you carry around and have to keep alive. But instead of an animal, the machine houses Taylor Swift.

Interaction Design

 

The design of the piece takes cues from various sources.

 

Product design cues from the Tamagotchi:

  • Physicality: portable, friendly

  • Emotion(s) evoked: curiosity, love, fun

  • Code logic: different emotional states of the pet (e.g. sad, happy, dead)

 

Interaction cues from keeping an actual pet (dog/cat):

  • Petting

  • Carrying, hugging

 

Interaction cues from actual fan-celebrity relationship:

  • Celebrity sings

  • Fan sings along and keeps consuming the song

Implementation

 

Sensors:

 

Actuators:

 

Fabrication:

  • Soft plastic casing

  • Faux fur (duster refill)

  • Stickers

 

Code Logic:

  1. The program keeps track of Taylor's emotions through a "mentalState" value. This value is decremented over time, but incremented whenever the program senses certain inputs.

  2. For example, the happy state is when mentalState is > 2000, and the sad state is when it is < -1000.

  3. On each loop, the program listens to the different sensors.

    1. FSR sensing simply checks for values above a certain threshold.

    2. Shake sensing counts every time the switch is toggled. Once the count reaches a certain number, the program triggers the corresponding reaction.

    3. The VR module listens for pre-trained sentences (one voice only) as input.

  4. Each input triggers a different screen color on the LCD. This serves as feedback to the user that the input has been recognized.

  5. Each mentalState (e.g. happy, bored, sad) loads a different image onto the LCD screen.

  6. A special command, "Sing to me!", triggers the mp3 module to play a pre-saved mp3 file through the speaker.
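The loop described above can be sketched in plain C++ as a desktop-runnable simulation. The mentalState thresholds (> 2000 happy, < -1000 sad) come from the post; the decay rate, sensor thresholds, shake-count target, and reward amounts are my assumptions, not the values from the actual sketch:

```cpp
#include <string>

// Simulation of the ❤️ Machine's emotion loop.
// Thresholds for happy/sad are from the post; all other constants
// (decay rate, FSR threshold, shake target, rewards) are illustrative guesses.
const int HAPPY_THRESHOLD = 2000;   // mentalState > 2000 -> happy
const int SAD_THRESHOLD   = -1000;  // mentalState < -1000 -> sad
const int FSR_THRESHOLD   = 300;    // assumed pressure reading that counts as "petting"
const int SHAKE_TARGET    = 10;     // assumed toggle count that counts as a "shake"

int mentalState = 0;  // Taylor's mood; decays each loop, boosted by inputs
int shakeCount  = 0;  // tilt-switch toggles seen so far

std::string currentState() {
    if (mentalState > HAPPY_THRESHOLD) return "happy";
    if (mentalState < SAD_THRESHOLD)   return "sad";
    return "bored";
}

// One pass of the main loop: decay the mood, then check each sensor.
void tick(int fsrReading, bool shakeToggled, bool heardSingCommand) {
    mentalState -= 1;  // mood decays over time

    if (fsrReading > FSR_THRESHOLD) {
        mentalState += 50;   // petting detected on the FSR
    }
    if (shakeToggled && ++shakeCount >= SHAKE_TARGET) {
        shakeCount = 0;      // enough toggles -> register a shake reaction
        mentalState += 200;
    }
    if (heardSingCommand) {
        mentalState += 100;  // "Sing to me!" -> mp3 playback would start here
    }
}
```

Running tick() repeatedly with no input drives mentalState down toward the sad state, mirroring how the pet must be kept alive through attention.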

 

Challenges:

  • Hardware

    • Wiring the LCD screen was the toughest part. It is surprisingly fragile, and I had to buy three screens before the soldering and connections were right.

  • Software

    • The Arduino UNO only has so much built-in memory, so I had trouble fitting everything together. The voice recognition and display libraries already take about 70% of the Arduino's RAM, so my main program couldn't be too bulky.

    • I had planned for the machine to display various dialogs, but String variables take up a lot of space, so I had to cut them.

 

Source code can be found here.

 

 

 
