@LAs

Summer 2019 // Developer



@LAs was a location-adaptive mixed reality pop-up pavilion researching the potential of contemporary machine learning to balance narrative cohesion with audience agency in future storytelling systems. The pavilion consisted of three exhibits plus on-boarding and off-boarding areas.

The @LAs (read ‘at LAs’ or ‘atlas’) prototypes explored opportunities for interactive installations about the city of Los Angeles in the context of the upcoming 2028 Olympic Games. Students creating exhibit prototypes searched for ways to portray the many layers of Los Angeles, engaging international visitors and locals alike. The infrastructure and concept focused on group over individual interaction, tracking audience participation passively but continuously: audience members were greeted in an on-boarding area where they received a unique QR code. Scanning the code with a mobile device directed visitors to a webform that asked several questions about their self-reported identity with respect to Los Angeles. Visitors were then free to circulate through three group interaction spaces, or exhibits. Upon entering an exhibit, visitors scanned their QR code; each scan triggered a notification that let our system know which audience members were in which exhibit at any given time. This data allowed the artists to author exhibits that changed dynamically based on who was present in the experience. Each exhibit pulled media from the same shared media repository, providing narrative cohesion as visitors explored the pavilion.
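The presence-tracking flow above can be sketched as a small piece of state that updates on each QR scan. This is a minimal illustration, not the production system: the class, visitor/exhibit identifiers, and method names are all my assumptions.

```python
from collections import defaultdict
from datetime import datetime, timezone

class PresenceTracker:
    """Hypothetical sketch: track which visitors are in which exhibit,
    updated each time a visitor scans their QR code at an entrance."""

    def __init__(self):
        self.location = {}               # visitor_id -> current exhibit_id
        self.rosters = defaultdict(set)  # exhibit_id -> set of visitor_ids
        self.log = []                    # append-only history of scans

    def scan(self, visitor_id, exhibit_id):
        """Handle one QR-code scan at an exhibit entrance."""
        previous = self.location.get(visitor_id)
        if previous is not None:
            # A new scan implies the visitor left their previous exhibit.
            self.rosters[previous].discard(visitor_id)
        self.location[visitor_id] = exhibit_id
        self.rosters[exhibit_id].add(visitor_id)
        self.log.append((datetime.now(timezone.utc), visitor_id, exhibit_id))

    def present(self, exhibit_id):
        """Visitors currently in the given exhibit."""
        return frozenset(self.rosters[exhibit_id])

tracker = PresenceTracker()
tracker.scan("visitor-42", "mural")
tracker.scan("visitor-7", "mural")
tracker.scan("visitor-42", "sobremesa")  # moving exhibits updates both rosters
```

A roster like `tracker.present("mural")` is what would let an exhibit adapt its media to whoever is currently in the room.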

Each prototype exhibit took a distinct angle in its exploration of Los Angeles. m/UR/al used the Los Angeles Mural Conservancy Dataset to synthesize computer-generated murals, which were projected for visitors to watch. Preferences chosen by audience members in the on-boarding questionnaire affected the color and style of the synthesis while they were in that section of the exhibit. activateLA used the OpenPTrack person-tracking software to give groups of visitors bodily control over exploring media about Los Angeles. Audiences selected topics from a movement-driven user interface, which were used to create a collage of selected images and synthesized text. The final exhibit was a “travelling restaurant” called Sobremesa, after the Spanish word for the conversation enjoyed at the table after finishing a meal. Food preferences provided by visitors at on-boarding were used to create a synthesized restaurant menu, projected in the space; the synthesis was trained on actual menu items from restaurants in Los Angeles. The exhibit used table projections and a tablet interface to encourage conversation among visitors, while an actor portraying a waiter guided discussion and provided explanation.
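To give a flavor of text synthesis trained on a corpus of menu items, here is a toy word-level Markov chain. The actual exhibits used machine-learning models trained on scraped LA restaurant menus; this tiny corpus and the chain itself are illustrative assumptions only.

```python
import random
from collections import defaultdict

# Toy corpus standing in for scraped LA restaurant menu items.
CORPUS = [
    "grilled carne asada tacos with cilantro",
    "crispy fish tacos with avocado crema",
    "grilled fish tacos with cilantro lime slaw",
]

def build_chain(lines):
    """Map each word to the words observed to follow it in the corpus."""
    chain = defaultdict(list)
    for line in lines:
        words = line.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def synthesize(chain, start, max_words=8, seed=0):
    """Walk the chain from a start word to generate a new menu item."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words and chain[words[-1]]:
        words.append(rng.choice(chain[words[-1]]))
    return " ".join(words)

chain = build_chain(CORPUS)
item = synthesize(chain, "grilled")
```

Every adjacent word pair in the generated item occurs somewhere in the corpus, which is what gives even a crude generator its recognizably “menu-like” texture.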

REMAP’s objective was to provide flexible infrastructure that allowed artists to author data-driven exhibits. We identified three strategies used to accomplish this goal: (1) conceptual and technical frameworks selected to bridge artistic and technical designs, (2) standardization on cloud storage to encourage host-independent approaches to media, and (3) an edge+cloud computing model incorporating serverless functions to encourage modular, host-independent development. After the Institute, we informally evaluated the promise of these strategies based on survey responses from participants.
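Strategy (3) can be sketched as a small, stateless handler in the style of an AWS-Lambda-style serverless function: each scan notification is processed independently, so exhibit logic can be developed and deployed without depending on any particular host machine. The event shape, handler name, and in-memory store below are assumptions for illustration, not REMAP’s actual API.

```python
import json

# Stands in for a managed cloud key-value store; a serverless function
# itself holds no state between invocations.
STORE = {}

def handle_scan(event, context=None):
    """AWS-Lambda-style handler for a hypothetical QR-scan notification.

    Expects an API-gateway-style event with a JSON string body containing
    `visitor_id` and `exhibit_id`, records the visitor's location, and
    returns an HTTP-style response dict.
    """
    body = json.loads(event["body"])
    visitor, exhibit = body["visitor_id"], body["exhibit_id"]
    STORE[visitor] = exhibit
    return {
        "statusCode": 200,
        "body": json.dumps({"visitor_id": visitor, "exhibit_id": exhibit}),
    }

response = handle_scan({"body": json.dumps(
    {"visitor_id": "visitor-42", "exhibit_id": "activateLA"})})
```

Because the function touches only its event and the shared store, the same handler can run at the edge or in the cloud, which is the modularity the strategy is after.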

The strategies and outcomes are synthesized in a case study (in progress after an initial submission). Please contact me if you’re interested in a preprint!



Person tracking driven by OpenPTrack gave visitors bodily control over the media they were served in this exhibit. The image shows three ‘lanes’ of media: folksonomies of media organization, or hashtags (left); images scraped from the internet with LA geotags (middle); and synthesized text trained on song lyrics about Los Angeles (right), shown over authored footage of a drive from Inglewood to UCLA, two sites of the upcoming 2028 Olympic Games. An accompanying soundscape was generated by speaking the synthesized text in multiple languages using translation and text-to-speech services.



In Sobremesa, visitors were invited to chat over a ‘meal’ whose menu was generated from food preferences supplied at on-boarding. The menus were generated based on actual LA restaurant menus scraped from the internet.



A behind-the-scenes view of development work:



The scenic design of the space evoked a metro station, a reference to the massive overhaul of public infrastructure necessary for Los Angeles to host the Olympics, and to the advantages and problems that result.



The stills above are used with the following permission:
UCLA TFT Future Storytelling Summer Institute 2019.   Copyright (C) 2019 Regents of the University of California.  Used by permission.