It’s an interesting question because, in part, the data _can_ make a difference if it has layers of abstraction within it to be teased out, adding additional subroutines to the algorithm. [I think of algorithms as programs, which can usually be converted from procedural to functional if one so chooses, although I prefer to go the other way.]
Take language, for example. Language has many layers of abstraction, dependencies, and uncertainties built into it, and not _all_ of them have been successfully teased out by Chomskyan grammar, even after 50 years.
[and may never be]
You could do maze-solving algorithms: those can get REALLY slow depending on the methodology chosen, the number of dimensions of the maze, the number of potential pathways and exits, and so on.
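As a minimal sketch of one standard methodology: breadth-first search finds the shortest path in a 2-D grid maze in time proportional to the number of cells, but the cell count (and the branching factor) blows up as dimensions and pathways multiply. The `solve_maze` function and the maze layout here are my own illustration, not anything from the discussion above.

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first search over a 2-D grid maze.
    grid: list of strings, '#' = wall, anything else = open.
    Returns the shortest path as a list of (row, col), or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}          # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the predecessor chain back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no route from start to goal

maze = [
    "S..#",
    ".#.#",
    "...E",
]
path = solve_maze(maze, (0, 0), (2, 3))
print(path)  # shortest route, 6 cells including start and exit
```

In a d-dimensional maze the neighbor loop grows to 2d directions and the cell count grows exponentially with d, which is exactly where "REALLY slow" kicks in.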
One of the reasons why “information-as-base-unit” doesn’t work as a generic atomic level is because numbers aren’t letters aren’t words aren’t pictures aren’t sounds. They can certainly be transformed into binary or other encodings but are you _sure_ you’re capturing everything?
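A concrete case of "are you _sure_ you're capturing everything?": two strings that render as the same word can encode to different bytes, so a naive binary representation silently drops the distinction (or invents one). This Python snippet, my own illustration using the standard `unicodedata` module, shows the trap with Unicode normalization forms:

```python
import unicodedata

s1 = "caf\u00e9"    # 'café' with a precomposed é (NFC form)
s2 = "cafe\u0301"   # 'café' as 'e' + combining accent (NFD form)

print(s1 == s2)            # False: same visible word, different code points
print(s1.encode("utf-8"))  # b'caf\xc3\xa9'
print(s2.encode("utf-8"))  # b'cafe\xcc\x81'

# Only after normalizing to a common form do the two compare equal.
print(unicodedata.normalize("NFC", s2) == s1)  # True
```

The bytes are a faithful encoding of the code points, yet "the same text" produced two different binary artifacts, which is the gap between the data and its representation that the paragraph above is pointing at.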
What about internal mental data compression routines? [this ties into the ‘capacity of the brain’ calculations as well].
Ah, ok. Well, if I were in your shoes, I’d see what’s out there already, so there’s no duplication. Other people have worked on what you’re working on and have each come up with their own answers. No need to start from scratch.
Unfortunately, you won’t like the answer I came up with for them: change the constraints of the question, moving it out of the realm of theory and into practice, turning a generic algorithm search into a specific-case solution search.
It’s the way we deal with that stuff now. In short, solve for the particular instead of the general, using whatever tools are necessary. Duct tape and bubblegum.