Late-night discussion subjects with Harold yesterday (argued over lengthily and passionately until 4 this morning)…
Among many others, we went over (getting less and less focused and informed as the discussion advanced):
1/ Is there any point in using an appropriately trained neural network for type-specific music compression over a regular algorithm?
2/ Is any task performed by a neural network more or less isomorphic to an interpolation function?
3/ Is any given neural network always replaceable by a definable iterative algorithm?
4/ Is the brain replaceable by an iterative algorithm?
5/ Is neuron transmission a discrete discontinuous function?
In a nutshell, my original stand was more or less:
YES – NO – NO – NO – YES
Harold’s being of course precisely the opposite…
Of course, neither of us really managed to convince the other, but we exchanged some interesting viewpoints, and the debate left us both with a lot of open questions in these areas…
As for 1/, I still think there is (among many other uses) a solid case for using a trained NN to define what counts as acceptable loss of quality within certain ranges of frequencies. Where a regular algorithm would typically only be able to deal with clearly defined types of music, a NN should have much better chances with anything slightly out of the ordinary. True, however, this is not necessarily a NN-only task, just one where it would do much better than any hand-written algorithm, imho.
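To make the contrast concrete, here is a toy sketch (everything in it is made up for illustration: real codecs rely on proper psychoacoustic models, and the "trained" weights below are just hand-picked numbers standing in for what training would produce):

```python
import math

# Toy contrast: deciding how much quality loss is acceptable per
# frequency band. All numbers here are illustrative, not real codec values.

def fixed_rule(band_hz):
    # A hand-written rule: a hard cutoff tuned for one "typical" signal.
    return 0.5 if band_hz > 10_000 else 0.05

def learned_rule(band_hz, w=(-6.0, 0.0007)):
    # Stand-in for a trained neuron: a smooth function of the band,
    # whose shape would come from training data rather than hand-tuning.
    return 1.0 / (1.0 + math.exp(-(w[0] + w[1] * band_hz)))

# The hand-written rule jumps abruptly at its cutoff; the "trained" one
# degrades gracefully on in-between, out-of-the-ordinary material.
```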
2/ was much more of a debate between us. Harold’s point being that given x inputs and y outputs, one can always define an interpolation function that does about as well as the network. Which I strongly disagreed with. First, because continuity between values is far from guaranteed for the output of a NN, by its very nature. Second, because even assuming this holds for basic NNs (perceptrons, etc.), it definitely does not as soon as we include more advanced types of NN: back-propagation and “memory” neurons, for example…
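The “memory” part admits a tiny counterexample (the weights below are arbitrary, chosen only for illustration):

```python
# A single "memory" neuron: its output depends on an internal state
# carried over from previous inputs (illustrative weights only).
def memory_neuron(inputs, w_in=0.8, w_state=0.5):
    state = 0.0
    outputs = []
    for x in inputs:
        state = w_in * x + w_state * state  # feedback: state persists
        outputs.append(state)
    return outputs

# Same *current* input value (1.0), different histories:
a = memory_neuron([0.0, 1.0])
b = memory_neuron([5.0, 1.0])
# a[-1] != b[-1]: no interpolation function f(current_input) -> output
# can reproduce this, because the mapping isn't single-valued.
```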
which got us directly into the more general:
3/ Without going too deep into the mathematical theory behind it, it’s rather easy to fathom the difficulty of integrating things like recursion and feedback into a “regular” interpolation function.
Using mutual recursion of a wildly complex nature ought to do the trick in theory, was Harold’s valid point; mine being that we are then basically talking about a form of NN.
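Both points can be seen in a few lines: for a fixed sequence length, unrolling the feedback loop into plain nested function calls does work (Harold’s side), but the unrolled expression is structurally just the network written out again (mine). The weights are illustrative only:

```python
# One step of a recurrent cell (illustrative weights).
def step(x, state, w_in=0.8, w_state=0.5):
    return w_in * x + w_state * state

def unrolled_3(x1, x2, x3):
    # The feedback loop unrolled as explicit nested composition:
    # recursion eliminated, but the network's structure reappears verbatim.
    return step(x3, step(x2, step(x1, 0.0)))

def looped(inputs):
    # The original loop-with-state formulation.
    state = 0.0
    for x in inputs:
        state = step(x, state)
    return state
```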
And I think that’s the bottom line of our whole discussion centered around NNs and computers: one can’t ignore the fact that computers are Turing machines, hence implementations of iterative algorithms (or am I missing something?), BUT we also know that a proof of existence is far from a proof of definability…
Actually, I’d be seriously tempted to say there’s a way to formally define a computer as a sufficiently strong consistent system and then unleash Gödel’s minions of logical hell on it… but I’ll gladly admit this is a little beyond my realm of mathematical ease.
4/ Well, of course, there are too many unknowns about the brain to give anything other than wildly random guesses.
However, some of what applies to a computer ought to be of some relevance to the brain… That is, assuming you consider the brain to function somewhat digitally…
Which breaks down essentially to:
5/ Is neuronal communication essentially digital or analog?
This one is definitely where I stand on firm ground, as I do think there are enough facts backing it: although there are some chemical steps that can take fairly continuous values, the electrical threshold involved in the neuronal communication process should ensure you get discrete values within the system.
To me, the mere presence of one “digital filter” is enough to liken the whole communication process to a digital one. Am I missing something there?
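The threshold argument can be sketched with a minimal integrate-and-fire model (all parameter values here are illustrative, not physiological):

```python
# Minimal integrate-and-fire sketch: the membrane potential takes
# continuous values, but the *output* is all-or-none -- a spike fires
# only when the potential crosses a threshold, then the potential resets.
# (Threshold and leak values are illustrative, not physiological.)
def integrate_and_fire(currents, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for i in currents:
        potential = leak * potential + i  # continuous accumulation
        if potential >= threshold:
            spikes.append(1)              # discrete, all-or-none output
            potential = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

spikes = integrate_and_fire([0.3, 0.3, 0.3, 0.3, 0.6, 0.1])
# However smoothly the input currents vary, the downstream signal
# is the binary spike train.
```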
It is interesting to see, overall, how even the simplest neuronal architectures seem able to produce layer after layer of abstraction, up to a point where the link to the underlying layers seems non-existent. An example of a seemingly iterative process that is practically impossible to reproduce…