1. Site images were manually clipped from Google Maps.
2. Python scripts cropped and renamed the images and generated a .txt caption file for each one.
3. A ComfyUI node-based definition was used to analyze the images and generate descriptive text for them.
4. A Flux.Dev Low-Rank Adaptation (LoRA) model was trained within the ComfyUI environment on the images and their descriptions.
5. A text-to-image definition was then used to generate new images of the site.
6. A Teachable Machine model was trained to classify the image types and curate sets.
7. An image sampler was used to create a topography in Rhino.
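The cropping-and-captioning step above can be sketched in plain Python with Pillow. The folder names, crop size, and caption template are illustrative assumptions, not the project's actual values, and the two generated stand-in images take the place of the real Google Maps clippings:

```python
"""Sketch of the image-prep step: crop each clipping to a centered square,
rename it sequentially, and write a matching .txt caption file, the
paired-file layout most LoRA trainers expect. All names are assumptions."""
from pathlib import Path
from PIL import Image

SRC = Path("raw_clips")   # assumed input folder of Google Maps clippings
DST = Path("dataset")     # assumed output folder for crops + caption files
SRC.mkdir(exist_ok=True)
DST.mkdir(exist_ok=True)

# stand-in clippings so the sketch runs end to end; the real inputs would
# be the manually clipped Google Maps images
for n, size in enumerate([(300, 200), (250, 250)]):
    Image.new("RGB", size, (120, 140, 90)).save(SRC / f"clip_{n}.png")

for i, path in enumerate(sorted(SRC.glob("*.png"))):
    img = Image.open(path)
    side = min(img.size)                       # largest centered square crop
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    crop = img.crop((left, top, left + side, top + side)).resize((512, 512))
    stem = f"site_{i:04d}"
    crop.save(DST / f"{stem}.png")
    # one caption .txt per image, same stem as the image file
    (DST / f"{stem}.txt").write_text(f"aerial view of the site, tile {i}")
```

In practice the captions would come from the ComfyUI image-analysis definition described in step 3 rather than a fixed template.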
1. The Ladybug plugin was used to run environmental analysis on the site.
2. The image sampler was then used to generate the topography in Rhino.
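The image-sampler step maps pixel brightness to elevation. Outside Rhino, the same idea can be sketched in plain Python with Pillow; the grid resolution, height scale, and gradient test image below are all assumptions for illustration, standing in for Grasshopper's sampling of a real heightmap:

```python
"""Sketch of the heightmap-sampling idea behind the Rhino topography:
sample a grayscale image on a regular grid and emit (x, y, z) points
that could be meshed or lofted into terrain."""
from PIL import Image

def heightfield(img, nx=20, ny=20, z_scale=10.0):
    """Return an nx-by-ny grid of (x, y, z) points; luminance sets z."""
    gray = img.convert("L")
    w, h = gray.size
    pts = []
    for j in range(ny):
        for i in range(nx):
            px = int(i * (w - 1) / (nx - 1))   # nearest source pixel
            py = int(j * (h - 1) / (ny - 1))
            z = gray.getpixel((px, py)) / 255.0 * z_scale
            pts.append((float(i), float(j), z))
    return pts

# demo on a synthetic left-to-right gradient, a stand-in for a real heightmap
grad = Image.new("L", (64, 64))
grad.putdata([(x * 255) // 63 for y in range(64) for x in range(64)])
points = heightfield(grad)   # 400 points, z rising from 0.0 to 10.0 with x
```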