


Q: Is Physical-to-Digital Transformation Always Right? Does Multi-state Exist?
This Experimental Camera is an AR world-building tool that lets users draw objects in their physical space using gesture recognition; the drawings are then identified by a Teachable Machine model. Once a drawing is recognized, the camera pulls related sentences from the Google Books API, enabling users to generate a personalized story based on what they drew. The project blends physical and digital spaces, empowering users to interactively build narratives in an augmented reality environment.
Google Books API + Machine Learning + Object Detection + Handpose
A: Reimagine Our Physical Space Through Our Abstract Thoughts - building a personalized narrative rooted in the user's environment, using their drawings to create a mixed reality that tells stories through machine learning and allowing a custom story to emerge from the interaction.
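A minimal sketch of the recognize-then-fetch loop described above, assuming a Teachable Machine image model exported to the web (loaded via the @teachablemachine/image library) and the public Google Books API. The model URL and function names here are hypothetical placeholders, not the project's actual code.

```js
// Hypothetical URL of an exported Teachable Machine image model.
const MODEL_URL = "https://teachablemachine.withgoogle.com/models/XXXX/";

let model;

async function init() {
  // tmImage is the global provided by the @teachablemachine/image script.
  model = await tmImage.load(MODEL_URL + "model.json", MODEL_URL + "metadata.json");
}

// Classify the canvas the user has been drawing on with hand gestures.
async function recognizeDrawing(canvas) {
  const predictions = await model.predict(canvas);
  // Take the highest-confidence label as the drawn object.
  predictions.sort((a, b) => b.probability - a.probability);
  return predictions[0].className;
}

// Pull sentences related to the recognized object from the Google Books API.
async function fetchStoryLines(label) {
  const res = await fetch(
    "https://www.googleapis.com/books/v1/volumes?q=" + encodeURIComponent(label)
  );
  const data = await res.json();
  return (data.items || [])
    .map((item) => item.searchInfo && item.searchInfo.textSnippet)
    .filter(Boolean);
}
```

The snippets returned by `fetchStoryLines` would then feed the story-generation step, so each recognized drawing contributes its own fragment to the user's narrative.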

Q: How can we break the habit of always looking ahead and instead apply the same vision to the present, paving the way to reach our ultimate future?
This is an online tool for 21st-century mixed-media creatives (artists, designers, writers, and other makers) to resolve creative blocks by reimagining and remixing their current space and time. As creatives, we have always imagined the world moving forward with a new vision.

A: Stories, stories are the answer - dissolving physical solids into point clouds lets the senses become independent, like molecules.

Point Cloud + Physical Presence + AI Co-Creation
Q: Are we separated from our environment?
This animation is based on the AR tool I created, the "AR Decentralized World Building Camera," which builds worlds from users' drawings. The goal is to reverse the existing trend of bringing the physical world into the digital. This camera instead brings the digital world's abilities into the physical world, creating a third realm where the existing can be altered and reimagined through the unlimited information the digital world provides.
This is the final project for the Digital Asset: 3D class instructed by Qianqian Ye, completed during my sophomore year in the Design and Technology major at Parsons School of Design.
Unity 3D + Point Cloud Photogrammetry + Particle Shader
Poitory is an animation project that combines photogrammetry, point cloud technology, and artificial intelligence to create a unique storytelling experience. It builds on the concept of an AR decentralized world-building camera, using COLMAP's point cloud reconstruction to deconstruct and molecularize a space and its objects. This process removes the boundaries between objects, allowing them to mix freely while each point retains data about its origin. Users can interact with the molecularized space by connecting points into lines and planes, ultimately creating new meshes. As users redevelop the space from the original image data, the system identifies the newly created objects and searches the Google Books API for relevant story phrases. These phrases are then combined into a coherent narrative using the RiTa.js library, producing a unique story based on the user's reconstruction of the space.
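The paragraph above doesn't specify how RiTa.js combines the phrases; one plausible approach, sketched below under that assumption, is RiTa v2's Markov-chain generator, which retells the collected Google Books snippets in a single blended voice. The `buildNarrative` helper is hypothetical.

```js
// Sketch of the story-assembly step, assuming the RiTa.js library (npm "rita")
// and an array of phrases already fetched from the Google Books API for the
// objects the user rebuilt in the point cloud space.
import { RiTa } from "rita";

function buildNarrative(phrases) {
  // Train a small Markov model on the collected phrases; a trigram model
  // needs a reasonable amount of text to generate without repeating inputs.
  const markov = RiTa.markov(3);
  markov.addText(phrases.join(" "));
  // Generate a few sentences and join them into one short narrative.
  return markov.generate(4).join(" ");
}
```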


This project has three products: the 2D Experimental Camera, the Concept Animation, and the 3D Experimental Camera. Please choose one to view.
A: Our existence defines our environment - releasing our senses to blend into the environment and softening the virtual so it becomes customizable, allowing you to experience its true nature.
This project, titled "AR Decentralized World Building Camera [Point Cloud Version]," is an innovative tool designed for 21st-century creatives, including artists, designers, and writers. It aims to help users overcome creative blocks by reimagining their current space through a blend of physical and digital realities. Using AR technology, the camera transforms the user's webcam feed into 3D point clouds, letting users manipulate their environment and generate new meshes. The camera then scans these meshes to detect recognizable objects, which are linked to the Google Books API to retrieve related story phrases.
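A minimal sketch of the webcam-to-point-cloud step, assuming three.js. Here pixel brightness is mapped to depth so the flat video feed "dissolves" into a manipulable cloud of colored points; this is an illustrative stand-in for the project's COLMAP photogrammetry pipeline, not its actual code, and `videoToPointCloud` is a hypothetical helper.

```js
import * as THREE from "three";

const STEP = 4; // sample every 4th pixel to keep the cloud light

function videoToPointCloud(video) {
  // Draw the current video frame to an offscreen canvas to read its pixels.
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d");
  ctx.drawImage(video, 0, 0);
  const { data, width, height } = ctx.getImageData(0, 0, canvas.width, canvas.height);

  const positions = [];
  const colors = [];
  for (let y = 0; y < height; y += STEP) {
    for (let x = 0; x < width; x += STEP) {
      const i = (y * width + x) * 4;
      const r = data[i] / 255, g = data[i + 1] / 255, b = data[i + 2] / 255;
      const depth = (r + g + b) / 3; // brightness drives z-displacement
      positions.push(x - width / 2, height / 2 - y, depth * 100);
      colors.push(r, g, b);
    }
  }

  // Pack the sampled pixels into a three.js point cloud.
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute("position", new THREE.Float32BufferAttribute(positions, 3));
  geometry.setAttribute("color", new THREE.Float32BufferAttribute(colors, 3));
  return new THREE.Points(
    geometry,
    new THREE.PointsMaterial({ size: 2, vertexColors: true })
  );
}
```

Calling this once per frame and adding the returned `THREE.Points` to the scene gives the softened, molecularized view of the room that the user can then reshape into new meshes.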
