A few interesting notes on prepping the generated file for print:
• the program preparing the file has a function that rotates the object into what it considers the ‘ideal position’, i.e. the most stable orientation for printing.
• from this ideal position it generates the support structures.
• notably, for some of the floating cursor-spheres, the program recognizes there is a volumetric space, and that resin needs to be allocated for it—but does not generate support structures.
• this would suggest that the program aims for one contained spatial object.
This could serve allegorically (in the same way the support structures function as algorithmic objects) as an example of post-cybernetic control performing this exact containment of alternative structures of articulation.
This first set results from an iteration through all recorded interaction data, matching cursor positions to the corresponding computational processes (by calculating an overlapping time threshold for each instance).
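The matching step could be sketched as follows; the record shapes, field names, and the ±0.5 s threshold are all assumptions of mine, since the actual log formats are not shown here:

```python
from dataclasses import dataclass

@dataclass
class CursorSample:
    t: float  # timestamp in seconds
    x: float
    y: float

@dataclass
class ProcessEvent:
    name: str
    start: float  # activity window of the computational process
    end: float

def match_cursor_to_processes(samples, events, threshold=0.5):
    """Pair each cursor sample with every process whose activity
    window overlaps a +/- threshold window around the sample time."""
    matches = []
    for s in samples:
        lo, hi = s.t - threshold, s.t + threshold
        # standard interval-overlap test between [lo, hi] and [start, end]
        hits = [e.name for e in events if e.start <= hi and e.end >= lo]
        matches.append((s, hits))
    return matches
```

The threshold turns two point-like timestamps into comparable intervals, which is one plausible reading of “calculating an overlapping time threshold for each instance”.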
This second set is a considerable improvement in veracity over the first, because the computational activity associated with the cursor—though significant—is by no means the only one. In this step, a computational space is opened up (by mapping the time of all instances of activity against the space/time-data gathered in step one).
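Opening up that computational space might look like the sketch below, which gives each activity timestamp the cursor position recorded closest in time; the tuple layout and function names are hypothetical:

```python
import bisect

def spatialize_activity(activity_times, cursor_track):
    """Assign each activity timestamp the cursor position recorded
    closest in time, mapping all activity into the space/time data
    gathered in step one. cursor_track is a list of (t, x, y)."""
    cursor_track = sorted(cursor_track)  # sort by timestamp
    times = [c[0] for c in cursor_track]
    placed = []
    for t in activity_times:
        i = bisect.bisect_left(times, t)
        # choose the nearer of the two neighbouring cursor samples
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        j = min(candidates, key=lambda k: abs(times[k] - t))
        _, x, y = cursor_track[j]
        placed.append((t, x, y))
    return placed
```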
When further refining this computational space, I am reminded of Luciana Parisi’s idea of topological control. Basically (and this is her case for algorithmic architecture, both as an architectural formal practice and as an infrastructural body), it describes a fabric of nodes that constitute parameters governing a specific space.
After numerous difficulties with mismatching data, I have some early results from matching temporary data to interface data. Above is the result of adding isometric wraps, based on the recorded temporary data (see below), to every instance in which my cursor movement was recorded. For the procedure, see below:
When this procedure of mapping temporary to displayed data works robustly, I will start to integrate specific qualitative reactions and mutations that engage the data on a case-by-case and relational basis.
Because of the complexity and volume of the gathered raw data, and because I am repurposing it, I am currently programming custom data crawlers (small purpose-built programs that iterate through the datasets line by line) that arrange the data so it is legible to future creative coding programs. This step, however, also required a number of analytical procedures beforehand, such as a ‘semantic’ log for this first dataset: what events and processes are being obfuscated, what are they called, and which of these can be said to relate specifically to the interface in question.
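A minimal sketch of such a crawler, assuming newline-delimited JSON records with `name`, `ts`, and `args` fields (the actual log format and event names are not specified here):

```python
import json

def crawl_log(lines, wanted_events):
    """Iterate raw log lines one by one, keep only events whose name
    appears in wanted_events, and normalise each record into a flat
    shape that later creative-coding programs can read."""
    records = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            raw = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines instead of aborting the crawl
        if raw.get("name") in wanted_events:
            records.append({
                "name": raw["name"],
                "ts": raw.get("ts"),
                "args": raw.get("args", {}),
            })
    return records
```

Passing `wanted_events` explicitly is where the ‘semantic’ log comes in: it is the curated list of event names deemed relevant to the interface in question.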
Above is Google’s DevTools visualization of the dataset produced. After numerous crawling attempts, this is a first tentative result:
This is a diagram drawn computationally from all the available positions tracked and reacted upon by Facebook during my interaction. Significantly, it has been generated entirely independently of any classically visual digital data (such as information from the HTML / CSS). The next step, purely at the level of data analysis, is to correlate the various extracted datasets chronologically.
“The blockchain’s comprehensive ability to allocate each piece of code within its system could completely eliminate the possibility of copying a song, for example, because who has which digital copy when would be traceable. A digital magazine based on the blockchain system would have unique copies, just like a printed magazine. It could be bought and sold like a physical object.” -Hannes Grassegger, “My Wet and Wild Bitcoin Weekend On Richard Branson’s Island Refuge”, Motherboard/VICE