Late-night discussion topics with Harold yesterday (argued over at length and passionately, for hours, until 4 this morning)…

Among many others, we went over (growing less and less focused and informed as the discussion went on):

1/ Is there any point in using an appropriately trained neural network for type-specific music compression over a regular algorithm?

2/ Is any task performed by a neural network more or less isomorphic to an interpolation function?

3/ Is any given neural network **always** replaceable by a definable iterative algorithm?

4/ Is the brain replaceable by an iterative algorithm?

5/ Is neuronal transmission a discrete, discontinuous function?

In a nutshell, my original stance was more or less:

YES – NO – NO – NO – YES

Harold’s being, of course, precisely the opposite…

Of course, neither of us really managed to convince the other, but we exchanged some interesting viewpoints, and it left us both with plenty of open questions in these areas…

As for 1/, I still think there is (among many other uses) a solid case for using a trained NN to define what is an acceptable loss of quality within certain ranges of frequencies. Where a regular algorithm would typically only be able to deal with clearly defined types of music, a NN should have much better chances with anything slightly out of the ordinary. True, however, this is not necessarily a NN-only task, just one where it would do much better than any conventional algorithm, imho.

2/ was much more of a debate between us. Harold’s point was that given x inputs and y outputs, one can always define an interpolation function that does about as well as the network. I strongly disagreed. First, because continuity between values is far from guaranteed for the output of a NN, by its very nature. Second, because even assuming this would hold for basic NNs (perceptrons, etc.), it definitely does not as soon as we include more advanced types of NN: back-propagation and “memory” neurons, for example…
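The continuity objection can be sketched in a few lines. This is a minimal toy (the weight, bias, and test inputs are arbitrary choices of mine, not anything from the discussion): a single hard-threshold unit is a step function, so arbitrarily close inputs on either side of the threshold produce maximally different outputs, and a continuous interpolant can only agree with it at the sampled points, never everywhere.

```python
def threshold_neuron(x, weight=1.0, bias=-0.5):
    """Hard-threshold unit: output is 0 or 1, nothing in between."""
    return 1 if weight * x + bias >= 0 else 0

# Inputs arbitrarily close on either side of the threshold (x = 0.5)
# give maximally different outputs -- a jump discontinuity:
print(threshold_neuron(0.499999))  # -> 0
print(threshold_neuron(0.500001))  # -> 1
```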

which got us directly into the more general:

3/ Without going too deep into the mathematical theory behind it, it’s rather easy to fathom the difficulty of integrating things like recursion and feedback into a “regular” interpolation function.

Harold’s valid point was that mutual recursion of a wildly complex nature ought to do the trick, theoretically. Mine was that we are then basically talking about a form of NN.
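The “memory” part of the objection can also be sketched. Again a toy of my own (the decay constant and input sequences are arbitrary): a recurrent unit carries state, so its output depends on the whole input history, not just the current input. No interpolation function of the current input alone can reproduce that.

```python
def run_recurrent(inputs, decay=0.5):
    """Leaky accumulator: state = decay * state + input, emitted each step."""
    state = 0.0
    outputs = []
    for x in inputs:
        state = decay * state + x
        outputs.append(state)
    return outputs

# The same *current* input (1.0 at the last step) yields different
# outputs depending on what came before:
print(run_recurrent([0.0, 1.0])[-1])  # -> 1.0
print(run_recurrent([1.0, 1.0])[-1])  # -> 1.5
```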

And I think that’s the bottom line of our whole discussion around NNs and computers: one can’t ignore the fact that computers are Turing machines, hence implementations of iterative algorithms (or am I missing something?), BUT we also know that proof of existence is far from proof of definability…

Actually, I’d be seriously tempted to say there’s a way to formally define a computer as a sufficiently strong consistent system and then unleash Gödel’s minions of logical hell on it… but I’ll gladly admit this is a little beyond my realm of mathematical ease.

4/ Well, of course, there are too many unknowns about the brain to offer anything other than wildly random guesses.

However, *some* of what applies to a computer ought to be of *some* relevance to the brain… That is, assuming you consider the brain to function somewhat digitally…

Which breaks down essentially to:

5/ Is neuronal communication essentially digital or analog?

This one is definitely my firm ground, as I do think there are enough facts backing it: although some chemical steps can take fairly continuous values, the electrical threshold involved in neuronal communication should ensure you get discrete values within the system.
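That threshold argument can be sketched with a toy accumulator (not a biophysical model; the threshold value and stimuli are arbitrary): the potential varies continuously as input accumulates, but the spike it triggers is all-or-none, so the analog input is squashed into discrete events at the threshold.

```python
def spikes(stimuli, threshold=1.0):
    """Integrate continuous input; emit a 1 (spike) only when the
    accumulated potential crosses the threshold, then reset."""
    potential = 0.0
    out = []
    for s in stimuli:
        potential += s          # continuous accumulation
        if potential >= threshold:
            out.append(1)       # all-or-none "action potential"
            potential = 0.0     # reset after firing
        else:
            out.append(0)
    return out

# Continuously-valued inputs, strictly binary output:
print(spikes([0.3, 0.4, 0.5, 0.2, 0.9]))  # -> [0, 0, 1, 0, 1]
```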

To me, the mere presence of **one** “digital filter” is enough to liken the whole communication process to a digital one. Am I missing something there?

It is interesting to see, overall, how even the simplest neuronal architectures seem able to produce layer after layer of abstraction, up to a point where the link seems non-existent. An example of a seemingly iterative process that is practically impossible to reproduce…

Theoretically, I side with Harold: there’s nothing unique about using a NN versus using conventional (usually frequency-based) mathematical transforms. Both have the capability to slam you right into the lower limit of entropy.

In the real world, I suspect David’s right. Psychoacoustic encoding is a really, really hard problem. It’s a lot easier to throw 100,000 monkeys into a room and have them press “better” and “worse” buttons (training a NN) than it is to throw 20 mathematicians into the room and have them come up with anything useful at all. 🙂

Pesky human limitations. 🙂

Mmm, agreed: that’s pretty much my point… theoretically possible, maybe, but practically impossible, definitely.

I’m finishing the entry right now…

What it basically comes down to is: Turing’s proof and the fact that we are talking about computers on one side, and levels of complexity and what can reasonably be considered a satisfying mathematical approximation on the other…

Oh, BTW, no need to worry about your email address being harvested by bots; MT is normally configured to mangle the @ and dots into HTML entities… not 100% sure it’s effective, but so far it seems to have been.

Regarding monkeys and time/number requirements….

Here are a few estimates I found on the web :o)

Hamlet: 1000 monkeys, eternity.

Windows: 10 monkeys, about a week.

Windows NT: 10 monkeys, a copy of VMS, and two weeks

.NET: All right, who taught the monkeys XML?

AIX: 20 monkeys and a truckload of crack.

Mac OS: Monkeys aren’t that stupid.

Mac OS X: Koko was a gorilla, not a monkey.

VMS: Monkeys aren’t that evil.

BSD: 100 monkeys, a bunch of duct tape, and some WD-40

System V: MONKEY is the registered trademark of AT&T Bell Laboratories.

Linux: Who needs monkeys when you have drunken Finns?

BeOS: 100 monkeys and some glitter.

Orange book level A1: I can neither confirm nor deny that monkeys exist.