On node.js convolution

From an email thread to various CIE cohorts

On Apr 19, 2026, at 4:31 PM, p.edelstein@composers-inside-electronics.net wrote:

For a rainy day…

This is a bit of a stretch follow-on to various discussions and threads with you all. I think it bodes well for a number of future projects.

I was gobsmacked by how quick and easy it was to have Google AI write the HTML and node.js backend code for a web page that automatically populates from server-side “samples” and “impulses” found in their respective folders and plays a user-selectable combination through a convolution.

I had been procrastinating writing this by hand (literally for years). Based on other recent peeks and pokes, I nailed the AI prompt on the first try.

If anybody offers encouragement, I’ll get this running in a more accessible manner sooner rather than later. I just need to find the easiest way to host the node.js backend publicly – either an externally accessible home server or render.com.

It looks very similar to the previous pure-HTML example from earlier in the week (link). Under the covers, this is a somewhat different technical approach than that earlier example, where the file lists had to be hand-built and were much more static, lacking scalability and variability. This is an important stepping stone for some of the ways I can see using this to save developer time.

Here is a snapshot of the resulting web page. It works on my local network. Some fiddling is required for access over the public internet.

I think this took all of 10 minutes from typing the prompt into Google to a running web application, just by following the instructions in the generated response (and I’m no good at following instructions). This email has been far more time-consuming.

The code looks very much as I expected it would. I could have pieced this together from old-fashioned programming aids, but it would have taken me three days of hunting and pecking to get the syntax right. Little time or appetite for that.

Works as expected from the browser on iPhone, iPad, and desktop. I will also try it on RPIs and a PiZero. For access across multiple devices, I modified just one line of the JavaScript source file, changing “localhost” to the IP address of the local computer it runs on.

Technically, the key is some familiarity with and appreciation of node.js. It kind of swims in the geeky waters of state-of-the-art, complex web-based app development. I’ve been using node.js more and more since the native object(s) for it surfaced in MAX a couple of years back. My first experience with node.js was a tip from Ron Kuivila about a javascript-based interface builder from Charlie Roberts that grabbed my attention way back when (link). From the standpoint of tech history, this was just as node.js was becoming ubiquitous in web software development a decade ago. Embarrassingly, it has taken me years to become even a little productive with it, and the recent AI goop works wonders as an accelerant for this old hacker.

The ability of the latest LLMs to bang this out is stunning.

The AI prompt was just:

generate a node.js application that builds a list of audio file from a sample folder and a list of audio impulse from an impulse folder and allows a user to select the sample and play that through the selected convolution

I pasted the prompt and response here … link so I can find them for future reference and share with others.

As intended, this works nicely on a local network, as it would be configured and expanded for use in various sound-installation things.
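Under the hood, the browser side of an app like this comes down to the Web Audio API’s ConvolverNode, which does the actual convolution. A minimal sketch of the playback path, assuming the backend serves files under /samples and /impulses (the routes and helper names are mine, not necessarily what the generated code uses):

```javascript
// Build the URL for a server-hosted audio file (assumed route layout).
function audioUrl(folder, name) {
  return '/' + folder + '/' + encodeURIComponent(name);
}

// Fetch a file and decode it into an AudioBuffer.
async function fetchAudioBuffer(ctx, url) {
  const resp = await fetch(url);
  return ctx.decodeAudioData(await resp.arrayBuffer());
}

// Play the selected sample through the selected impulse response.
async function playConvolved(ctx, sampleName, impulseName) {
  const [sample, impulse] = await Promise.all([
    fetchAudioBuffer(ctx, audioUrl('samples', sampleName)),
    fetchAudioBuffer(ctx, audioUrl('impulses', impulseName)),
  ]);
  const src = ctx.createBufferSource();
  src.buffer = sample;
  const convolver = ctx.createConvolver(); // performs the convolution
  convolver.buffer = impulse;              // the impulse response
  src.connect(convolver).connect(ctx.destination);
  src.start();
}

// Usage from a click handler (browsers require a user gesture to start audio):
// playConvolved(new AudioContext(), 'bell.wav', 'stairwell.wav');
```

Note that autoplay policy on iPhone/iPad and desktop alike means the AudioContext must be created or resumed from a user tap, which is why the page needs a play button rather than starting on load.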

Bunch of things to try next:

  • see how this runs on:
    • a Raspberry PI server (this is really lightweight and portable)
    • RPI and PiZero clients (for a distributed rather than central architecture for an installation) – could be an Arduino ESP32 with WiFi as well, but the PiZero is probably more attractive
  • expand to longer sample files – just drag, drop, and restart
  • move over to a publicly accessible IP address on my home network
    • set up port forwarding on my home FIOS router, or all the clicks for render.com
  • play around with logic for more extended interactive and modulated self-playing
  • show how this can be launched on a visitor’s smartphone from a QR code from anywhere (this really wants to be on an https certificate-enabled backend; there are free solutions for this, as outlined in the AI response, and some other reasonably low-cost alternatives)
  • add:
    • additional playback controls
    • multiple concurrent voices
    • a feed to other backends that would allow interaction with real objects (for example, playing a sample through bespoke instruments and objects)
  • integrate with iPhone sensors – there are some subtleties here with browser privacy rules (for another day)
  • a wearable variation

This is enticing, as several of us have discussed using text descriptions to produce, with AI, audio processing and generation mechanisms beyond this convolution example.

I keep thinking about Cycling ’74’s RNBO, which may now be obviated in some ways. This also prompted a Daisy Seed convolution prompt (link).

This is in the bucket of ideas that have been gnawing away at me for years. Quite suddenly, with the latest LLMs, the effort involved has been dramatically reduced.

Stay tuned.

Best 

Phil

This QR code, as-is, works on a local network… I imagine some future iteration pasted at the entrance of a gallery, labeled TRY ME.