

It’s important to take into account AudioFinder’s long history. It was the first Mac OS X native sample manager ever released. After 2010, AudioFinder started reaching the limits of what’s humanly possible to manage. AudioFinder became more complex and tricky to use because the number of sounds in the world to manage grew exponentially. What I mean by that is, think about 20 years ago: how big were hard drives, and how many sample libraries were there? About 5 years ago I decided it was out of hand; there was no database or trick I could add that would scale to the task at hand. You need the computer to do the filtering for you, based on AI, to go further. At that point I stopped just adding random feature requests to AudioFinder, because it’s already got plenty of stuff in it, and the name is Audio “Finder” after all. The Finding part was breaking down at scale. People have misinterpreted this as a lack of attention on my part, even though for years I have been posting messages that I was working on a big new feature. I saw little point in doing anything other than fixing the Finding feature in AudioFinder.

The way I always used AudioFinder is, for lack of a better word, randomly. What I mean by that is I scan all my sounds and then use the random feature to pick sounds that I may or may not use in a project. If you have 4000 snares, for instance, your brain won’t remember them all even if you listen to them. But what if you don’t know they are snares? What if they are called sound123.wav? Then what?

I first wrote AudioFinder almost 20 years ago, when my hard drive started filling up with samples and I had no easy way to play them. When I got too many sounds, I added the database to allow tagging them, and that was good for a while. However, over time the problem got much more complicated because I eventually ended up with millions of sounds, and my random workflow broke down at that scale.

This is why I’ve spent the last few years developing an AI system: it can break sounds down into broader categories, and within those broader categories I have a better chance of finding the sound I need. Manually tagging sounds breaks down at scale because it’s simply not possible to listen to all the sounds I have and then tag them. I don’t enjoy the chore of listening to a sound and tagging it either; it’s a complete waste of energy. This is why AudioFinder’s feature set has remained fixed for the last few years while I experimented with and developed this AI: there’s little point in pursuing a manual tagging database, because it doesn’t scale into the future.

ICED AUDIO AUDIOFINDER VS MANUAL

The AI in AudioFinder is completely unique; I invented it. Perhaps someone will come up with a better approach, but at this point all of this is brand-new territory. Training my AI was incredibly time-consuming: I had to listen manually to one million sounds and curate them into usable broad categories. It’s now getting to the point that I can use the AI to train the AI further.
