Twisting features with properties

ZOPPIS, ITALO FRANCESCO
2002

Abstract

We provide three steps toward shifting probability from a descriptive tool for unpredictable events to a way of understanding them. At a very elementary level we state an operational definition of probability based solely on symmetry assumptions about the observed data. This definition nevertheless converges to Kolmogorov's within a special law-of-large-numbers regime, which represents a first way of twisting features observed in the data with properties expected in the next observations. Within this meaning of probability we fix a general sampling mechanism for generating random variables and extend our twisting device to compute probability distributions over population properties on the basis of the likelihood of the observed features. Here the core of randomness translates from the above symmetry assumptions into a generator of uniform random variables on the unit interval. Aiming to discover suitable features (which are classically defined as sufficient statistics), we refer directly to the notion of Kolmogorov complexity and to the coding theorem in particular. This connects the features to the inner structure of the observed data in terms of concise computer codes describing them within a well-equipped computational framework. This new statistical framework allows us to recover and improve results on computational learning at both the subsymbolic and symbolic stages, figuring a unique shell where the full trip from sensory data to their conceptual management might occur.
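
As a rough illustration of the two devices the abstract names (not taken from the paper itself: the Bernoulli setting, the function names, and the replica construction below are illustrative assumptions), here is a minimal Python sketch of a sampling mechanism that maps uniform seeds on the unit interval to observations, and of a twisting device that maps an observed statistic back into a distribution of compatible population parameters:

import random

def bernoulli_sample(p, seeds):
    # Sampling mechanism: each uniform seed u in [0, 1] is mapped to a
    # Bernoulli(p) bit through the indicator [u <= p].
    return [1 if u <= p else 0 for u in seeds]

def twist_parameter(k, m, n_replicas=10_000, rng=random):
    # Twisting device (sketch): fixing the observed statistic k (number
    # of successes in m trials), redraw the seeds and ask which parameter
    # value would have reproduced k with them. With sorted seeds, any p
    # in [seeds[k-1], seeds[k]) yields exactly k successes; we take the
    # lower edge as one replica of the unknown parameter.
    replicas = []
    for _ in range(n_replicas):
        seeds = sorted(rng.random() for _ in range(m))
        replicas.append(seeds[k - 1] if k > 0 else 0.0)
    return replicas

if __name__ == "__main__":
    m, p_true = 20, 0.3
    seeds = [random.random() for _ in range(m)]
    k = sum(bernoulli_sample(p_true, seeds))
    reps = sorted(twist_parameter(k, m))
    print(f"observed statistic k = {k} out of m = {m}")
    print(f"median parameter replica: {reps[len(reps) // 2]:.3f}")
    lo, hi = reps[int(0.05 * len(reps))], reps[int(0.95 * len(reps))]
    print(f"0.90 twisting interval for p: ({lo:.3f}, {hi:.3f})")

Under these assumptions each replica is the k-th order statistic of m uniform seeds, i.e. a Beta(k, m-k+1) draw, which is the flavour of parameter distribution a twisting argument yields for a Bernoulli population.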
Type: paper
Keywords: Computational learning, statistical inference, twisting argument
Language: English
Conference: Italian Workshop on Neural Nets (WIRN VIETRI-01), May 17-19, 2001
Editors: Tagliaferri, R.; Marinaro, M.
Book title: NEURAL NETS WIRN VIETRI-01
ISBN: 1-85233-505-X
Year: 2002
Pages: 301-312
DOI: none
Apolloni, B., Malchiodi, D., Gaito, S., Zoppis, I. (2002). Twisting features with properties. In NEURAL NETS WIRN VIETRI-01 (pp.301-312).
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10281/31469
Citations
  • Scopus: ND
  • Web of Science (ISI): 0