
Magic Windows & Comp. Typo | Final Prototype

December 4, 2018





Read the conceptualizing process here.

TLDR: I was inspired by Zach Lieberman & the unique physicality afforded by AR to make a typographic sculpting app.


User 1 —> Site-specific AR artifact <— User 2



Here's what I had to figure out:

  1. Create a database of a font's glyphs using opentype.js

  2. Get location of device

  3. Draw AR spheres & lines based on glyph commands & location

  4. “Sculpt” the letters using AR interaction

  5. Upload the new coordinates + location to a database


So far I have figured out 1 to 4. Here's a step-by-step:


1. Database of a font's glyphs


Using Allison’s example code, I parsed the LeagueGothic font and printed all of the glyph (character) drawing commands to the console.
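For reference, the shape of that data can be sketched like this. This is a minimal sketch, not the actual parsing code: `glyphToRecord` is a made-up helper name, and the commented-out opentype.js calls show my assumed usage of its `font.glyphs` / `glyph.path.commands` API.

```javascript
// Minimal sketch of dumping a font's glyph drawing commands, assuming the
// command shape opentype.js exposes on glyph.path.commands (M/L/Q/C/Z).
// glyphToRecord is a hypothetical helper, not from this post's code.
function glyphToRecord(name, commands) {
  // One record per glyph: the character it draws plus its raw drawing commands.
  return { glyph: name, commands: commands };
}

// With opentype.js, the records would come from the real font, roughly:
//   const opentype = require('opentype.js');
//   opentype.load('LeagueGothic-Regular.otf', (err, font) => {
//     for (let i = 0; i < font.glyphs.length; i++) {
//       const g = font.glyphs.get(i);
//       console.log(glyphToRecord(g.name, g.path.commands));
//     }
//   });

// Hand-written stand-in for one parsed glyph ("L"):
const record = glyphToRecord("L", [
  { type: "M", x: 0, y: 0 },
  { type: "L", x: 0, y: 700 },
  { type: "Z" },
]);
console.log(JSON.stringify(record));
```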


Because Unity ARKit can’t package JSON files and export them as part of a standalone app, I have to serve these drawing commands from a server. So I copied the commands into a variable in a Node server and made a custom API to fetch them:


2. Get location of device


I use Unity's LocationService (`Input.location`) to get the latitude and longitude.


Once I have those, I feed them into the Google Maps Geocoding API to reverse-geocode them into a location name. The API often returns an array of candidate locations, so the code just takes the first one in the array.
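The "take the first one" step amounts to something like this. It's a sketch: the response object is hand-written in the shape the Geocoding API returns (`status` plus a `results` array with `formatted_address` fields), rather than fetched from the live `maps/api/geocode/json?latlng=LAT,LNG&key=KEY` endpoint.

```javascript
// Sketch of picking a location name out of a Google Maps reverse-geocoding
// response. The sample response is hand-written in the API's shape.
function firstLocationName(response) {
  if (response.status !== "OK" || response.results.length === 0) return null;
  // The API returns an array of candidate results; just take the first.
  return response.results[0].formatted_address;
}

const sampleResponse = {
  status: "OK",
  results: [
    { formatted_address: "370 Jay St, Brooklyn, NY 11201, USA" },
    { formatted_address: "Downtown Brooklyn, Brooklyn, NY, USA" },
  ],
};

console.log(firstLocationName(sampleResponse));
// → "370 Jay St, Brooklyn, NY 11201, USA"
```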


3. Draw spheres & lines to spell the location name


So far I have (1) a series of coordinates for drawing each letter, and (2) the location name.


The next step is to use this data to instantiate spheres and lines that draw the letters of the location name. To do this, I instantiate a sphere at each x & y, ignoring the curve control points in the drawing commands, and use those x & y values as the LineRenderer's positions.


Here’s the code:


    void drawLetters(Vector3 plane_pos){
        Debug.Log ("DRAWING LETTERS: " + test_text);
        int index = 0;
        int line = 1;

        foreach (char c in test_text) {
            if (c.ToString () == " ") { //if character is a space, start a new word on the next line
                index = 0;
                line++;
                continue;
            }

            var ind = findIndexOfGlyph (c.ToString ());
            var commands = allglyphs [ind] ["commands"];

            for (var i = 0; i < commands.Count; i++) {
                string type = commands [i] ["type"].Value;

                if (type != "Z") { //skip closePath commands; curves are treated as straight segments
                    //scale the glyph coordinates down, offset by character & line position
                    var x = float.Parse (commands [i] ["x"].Value) * scale * 0.05f + (index * x_dist * 0.1f);
                    var y = float.Parse (commands [i] ["y"].Value) * scale * 0.05f + (line * y_dist * 0.1f);

                    //anchor on the detected plane
                    var adj_x = x + plane_pos.x;
                    var adj_y = y + plane_pos.y;

                    //draw sphere
                    GameObject go = Instantiate (spawnee, new Vector3 (adj_x, adj_y, plane_pos.z), Quaternion.identity);
                    //send message to prefab to assign its index in the positions array
                    go.SendMessage ("Indexing", positions.Count);
                    positions.Add (new Vector3 (adj_x, adj_y, plane_pos.z));
                }
            }

            index++; //advance to the next character slot
        }

        //feed every sphere position to the LineRenderer
        lRend.positionCount = positions.Count;
        lRend.SetPositions (positions.ToArray ());

        isGlyphDone = true;
    }



4. Modify the letterform by moving spheres & lines


The logic for moving the spheres & lines is as follows:

User touches screen -> (using raycasting) if a sphere is selected, the sphere moves along with the position of the device -> the line is modified accordingly.
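The core of that update can be sketched in plain JavaScript, for illustration only; the app itself does this in Unity C#, and `moveSphere` is a made-up name, not a function from the actual code.

```javascript
// Hypothetical sketch of the "sphere follows the device" update: move only
// the selected sphere by the device's movement since selection. The line
// then simply re-reads the updated positions array.
function moveSphere(positions, selectedIndex, deviceDelta) {
  const updated = positions.slice(); // keep the other spheres untouched
  updated[selectedIndex] = {
    x: positions[selectedIndex].x + deviceDelta.x,
    y: positions[selectedIndex].y + deviceDelta.y,
    z: positions[selectedIndex].z + deviceDelta.z,
  };
  return updated;
}
```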


Here it is in action:



The code is pretty convoluted so I’m not going to post it here… I will upload the entire code to GitHub soon!



To Do

  1. Save the new coordinates to a database, so that users can collectively sculpt the same typographic artifact.

  2. Introduce more motion, interactivity, and physics to make the artifact more interesting, compelling, and meaningful.



(On the connection between interactivity and the reading of the text)


In this project, there are two contributors to the typographic artifact:

  1. The location of the device determines the text

  2. Users in the same location morph the letter forms


Because the location determines the text and the interaction is limited to that location, I would argue that the visual presentation *cannot* be separated from the content of the text.


For the sake of argument, let’s say that in this case, “location” equals “text”. Users have to be in a specific location to morph the artifact. Hence, the morphed letterforms are only possible because of the location/text.


Having said that, I’m not sure that “location” equaling “text” makes this piece particularly compelling. In the next iteration (final week), I would like to experiment with different site-specific data as the text artifact, and/or introduce another interaction that makes the argument more compelling, or at least makes the piece look cooler.




Hafiyyandi | Creative Technologist

New York, New York | hafiyyandi@gmail.com