Music Player application built with free-tier ChatGPT and Gemini CLI


Using AI to develop a modern web application is easier than ever at the time of writing, but there are still challenges.

The biggest one is cost. Many AI cloud services like ChatGPT and Google Gemini do have free tiers, but in the case of ChatGPT you quickly hit the upper limits of the more advanced features on offer, at which point it falls back to an older model and becomes almost completely unusable.

When that fallback to an older model occurs, you quickly experience hallucinations, and the AI starts producing unpredictable results.

I’ve had some success with both ChatGPT and Google’s Gemini CLI, both separately and on the same project.

Here are a few recent examples:

The first two apps below are Vite projects, with an attempt at a standard workflow: test cases, modern tooling, a src directory, and a build process.

Ad-Amp

https://play.gigazarak.com/adamp/index.html

An MP3 player: supply your own tracks, then run a build process that reads album art from each file’s ID3 info. If none is found, it consults Last.fm for album art plus artist and track info, falling back to Wikipedia if no results are found. All using free API endpoints.
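
Under the hood, the build step is essentially a chain of fallbacks. Here is a minimal Python sketch of that order of precedence; the lookup functions are hypothetical stand-ins, not the app’s real code (the real build step would use an ID3 tagging library and the free Last.fm / Wikipedia endpoints):

```python
# Sketch of the metadata fallback chain: ID3 tags first, then Last.fm,
# then Wikipedia. Each source is a stand-in stub for illustration only.

def lookup_metadata(track_path, sources):
    """Try each metadata source in order; return the first hit."""
    for name, source in sources:
        result = source(track_path)
        if result is not None:
            return name, result
    return None, None

def from_id3(path):
    return None  # pretend this file has no embedded album art

def from_lastfm(path):
    return {"artist": "Unknown Artist", "art": "https://example.com/art.png"}

def from_wikipedia(path):
    return {"artist": "Unknown Artist"}

sources = [("id3", from_id3), ("last.fm", from_lastfm), ("wikipedia", from_wikipedia)]
name, meta = lookup_metadata("track01.mp3", sources)
print(name)  # last.fm: the first source that returned something
```

The point of structuring it this way is that adding another free API later is just one more entry in the list.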

WrdHelpr

https://adamlemmo.github.io/wrdhelpr/

Enter the letters from your word game, and it will give you the excluded words, which you can then use in other word-helper apps like Word Hippo, so you can cheat on your favourite word games! (Use responsibly.)
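
The core of a word helper like this is a letter-availability check against a word list. The following Python sketch is my own guess at the logic, not WrdHelpr’s actual code, with a tiny hard-coded word list for illustration:

```python
# Split a word list into words you can spell from your letters and the
# "excluded" words you can't, respecting how many of each letter you hold.
from collections import Counter

def can_make(word, letters):
    """True if `word` can be spelled from the available `letters`."""
    have = Counter(letters.lower())
    need = Counter(word.lower())
    return all(have[ch] >= n for ch, n in need.items())

word_list = ["ate", "tea", "eat", "tax", "tee"]
letters = "aet"
playable = [w for w in word_list if can_make(w, letters)]
excluded = [w for w in word_list if not can_make(w, letters)]
print(playable)  # ['ate', 'tea', 'eat']
print(excluded)  # ['tax', 'tee'] - 'tax' needs an x, 'tee' needs two e's
```

Using `Counter` rather than a set matters here: with the letters "aet" you hold only one "e", so "tee" is correctly excluded.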

Answezzy?

https://answezzy.gigazarak.com/index.html

This quiz app was made on the fly with ChatGPT, plus time and effort: iterating again and again, copying and pasting code from the chat responses, prompting, re-prompting, honing, crafting, dodging and weaving until the desired result was achieved. The app was made early in the AI craze and was last updated in late September 2024. Since then ChatGPT has become a lot more dev-friendly, but again, free limits and the fallback to older models make the process one of diminishing returns, at least per session; when tokens reset and limits are lifted after a period of time, the process begins anew!

You still need to have some idea of what a web app is, how it’s structured, and what processes can and should be applied to it. AI speeds all of this up a lot, to the point where Gemini CLI even edits the code for you. This is a mixed bag: when it’s right, it’s great, but when it makes mistakes, it can be catastrophic. If you make sure you’ve set the project up with a git repo, though, you can always roll back to the last good version.

How to install Stable Diffusion for AI image generation (mainly on Windows but applies to other OSes if you squint)

This image was created with Stable Diffusion, some prompts, and a little dash of Photoshop….

AI Art is all the rage at the moment, you don’t want to miss out do you? Follow this guide to be the envy of all your friends. You’ll be generating some weird stuff in no time!

What is Stable Diffusion?

Developed in Python, Stable Diffusion is an open-source AI art generator released on August 22, 2022 by Stability AI. Read more about what Stability AI is doing at their website.

System Requirements

  • A GPU with at least 6 gigabytes (GB) of VRAM
    • This includes most modern NVIDIA GPUs
  • ~10GB of storage space on your drive 
  • Git version control software
  • The Miniconda3 installer
  • The Stable Diffusion files from GitHub
  • The latest checkpoints (v1.4 is current at the time of writing; 1.5 is due to be released soon)
  • Windows 8, 10, or 11 (Linux and macOS also work)

Step 1 – Software installation

The checkpoint files can be found here:

https://huggingface.co/CompVis/stable-diffusion-v-1-4-original

It’s quite a confusing page at the time of writing, so here’s a helpful screen capture of where to get it on the page.

You can download “sd-v1-4.ckpt” or “sd-v1-4-full-ema.ckpt”, the latter being quite large!
So grab the first one if you aren’t ready for the ‘full’ experience.

Step 2 – Folder configuration

With everything installed as per the previous section, we now need to set up a few folders on our local computer and unpack the files for Stable Diffusion.

Navigate to your “projects” folder (or similar) on your computer and make a folder called “stable-diffusion” (e.g. C:\projects\stable-diffusion).

In your file manager, open the “stable-diffusion-main.zip” file you downloaded earlier, and copy the contents of this ZIP archive into the “stable-diffusion” folder you created.

You should now have a file path similar to the following:

c:\projects\stable-diffusion\stable-diffusion-main

Step 3 – Environment configuration

Select the Start menu and start typing “miniconda3” then select “Anaconda Prompt (miniconda3)”

With this command line terminal open, enter the following commands one at a time.

cd C:\projects\stable-diffusion\stable-diffusion-main

This ensures that you have navigated to your stable-diffusion-main folder. (This assumes C:\projects is your root path; change it if necessary.)

conda env create -f environment.yaml

This creates the necessary development environment for you to run Stable Diffusion correctly. It may take a while (depending on your download speed) as some of the files are quite large, so be patient.

If you weren’t patient, or cancelled or paused this process for any reason, you will need to delete the partially created environment before trying again: delete the “ldm” folder in C:\Users\<your user account>\.conda\envs, then run the conda env create -f environment.yaml command again.

This can also be an issue if you don’t have enough space on your hard drive (I learnt this the hard way); if so, just delete the failed attempt and start again.

conda activate ldm

This activates the conda “ldm” environment; you will need to do this every time you want to use Stable Diffusion.

mkdir models\ldm\stable-diffusion-v1

This creates a folder to store the checkpoint file you downloaded earlier.

Copy the checkpoint file (sd-v1-4.ckpt) into this new folder. Rename this file as model.ckpt.
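
If you prefer, the folder setup and rename above can be scripted in Python (which Stable Diffusion already requires). For safety this demo runs in a temporary directory; point `root` at your real stable-diffusion-main folder and `ckpt` at the downloaded checkpoint instead:

```python
# Scripted version of the mkdir + copy + rename steps above.
import shutil
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())              # stand-in for ...\stable-diffusion-main
target = root / "models" / "ldm" / "stable-diffusion-v1"
target.mkdir(parents=True, exist_ok=True)    # same as the mkdir command above

ckpt = root / "sd-v1-4.ckpt"                 # stand-in for your downloaded file
ckpt.write_bytes(b"fake checkpoint")         # placeholder contents for the demo
shutil.copy2(ckpt, target / "model.ckpt")    # copy and rename in one step
print((target / "model.ckpt").exists())      # True
```

`exist_ok=True` means it is safe to re-run if the folder already exists.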

Step 4 – Using Stable Diffusion

Open Anaconda Prompt (miniconda3); this will show you a terminal window, something like the following.

Navigate to C:\projects\stable-diffusion\stable-diffusion-main

Enter the following command into this prompt

conda activate ldm

Then enter the following into the prompt (replacing <YOURPROMPTHERE> with some unique text of your own)

python scripts/txt2img.py --prompt "<YOURPROMPTHERE>" --plms --ckpt models/ldm/stable-diffusion-v1/model.ckpt --skip_grid --n_samples 1

The console window will show a progress indicator as it creates the images; wait for this to complete, then check out the results!

All images produced by txt2img.py can be found at:

C:\projects\stable-diffusion\stable-diffusion-main\outputs\txt2img-samples\samples

You can even generate a new image from a prompt plus an existing image with the following script.

python scripts/img2img.py --prompt "<YOURPROMPTHERE>" --init-img "inputs/input.png" --strength 0.75 --ckpt models/ldm/stable-diffusion-v1/model.ckpt --skip_grid --n_samples 1
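
If you want to queue up several prompts in one go, a tiny Python wrapper can build the same txt2img command for each. This is my own convenience sketch, not part of the Stable Diffusion repo; run it from the stable-diffusion-main folder with the ldm environment activated, and uncomment the subprocess line to actually generate images:

```python
# Batch several prompts through txt2img.py by building the command line
# shown above for each one (with --ckpt pointing at the renamed model.ckpt).
import subprocess  # used once the subprocess.run line below is uncommented

def txt2img_cmd(prompt, ckpt="models/ldm/stable-diffusion-v1/model.ckpt"):
    """Build the argument list for one txt2img.py invocation."""
    return ["python", "scripts/txt2img.py",
            "--prompt", prompt,
            "--plms", "--ckpt", ckpt,
            "--skip_grid", "--n_samples", "1"]

prompts = ["a watercolour fox", "a neon city at night"]
for p in prompts:
    cmd = txt2img_cmd(p)
    # subprocess.run(cmd, check=True)   # run inside the activated ldm env
    print(" ".join(cmd))                # preview the command instead
```

Each run drops its images into the outputs folder described below, so you can leave a batch going and review everything afterwards.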

Here are a few samples I’ve generated with this setup; the captions are the prompts I used.

A photorealistic picture of Ironman from Avengers holding a lightsaber in a sword fight with Darth Vader from Star Wars
A photorealistic picture of Ironman from the Avengers in Star Wars as a Jedi
A technical drawing of Ironman from Avengers as the Vitruvian man
