
What is the curse of dimensionality?

Answer by Janne Sinkkonen.

The curse of dimensionality (Bellman, 1961) refers to the exponential growth of hypervolume as a function of dimensionality. In the field of neural networks (NNs), the curse of dimensionality expresses itself in two related problems:

• Many NNs can be thought of as mappings from an input space to an output space. Thus, loosely speaking, an NN needs to somehow “monitor”, cover, or represent every part of its input space in order to know how that part of the space should be mapped. Covering the input space takes resources, and, in the most general case, the amount of resources needed is proportional to the hypervolume of the input space. The exact formulation of “resources” and “part of the input space” depends on the type of network and should probably be based on the concepts of information theory and differential geometry. As an example, think of vector quantization (VQ), in which a set of units competitively learns to represent an input space (much like Kohonen’s Self-Organizing Map).
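The exponential blow-up is easy to see numerically. The sketch below (my illustration, not part of the original answer) counts how many VQ-style prototype units a naive grid covering would need to represent a unit hypercube at a fixed resolution; the choice of 10 bins per axis is an arbitrary assumption, and real VQ placements are adaptive, but the exponential trend is the same.

    # Sketch: units needed to cover a unit hypercube at a fixed resolution.
    # Assumes a naive grid covering with `bins` cells per axis (arbitrary choice).

    def units_to_cover(dim: int, bins: int = 10) -> int:
        """Number of grid cells (one prototype unit each) in a dim-cube."""
        return bins ** dim

    for dim in (1, 2, 3, 5, 10):
        print(f"{dim:2d} dimensions -> {units_to_cover(dim):,} units")
    # 1 dimension needs 10 units; 10 dimensions already need 10,000,000,000.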


The curse of dimensionality refers to the tendency of a state space to grow exponentially in its dimension, that is, in the number of state variables. This is, of course, a problem for dynamic programming and other table-based reinforcement learning (RL) methods, whose complexity scales linearly with the number of states. Many RL methods are able to partially escape the curse by sampling and by function approximation. Curiously, the key step in the tile-coding approach to function approximation is expanding the original state representation into a vastly higher-dimensional space. This makes many complex nonlinear relationships in the original representation simple and linear in the expanded representation. The more dimensions in the expanded representation, the more functions can be learned easily and quickly. I call this the blessing of dimensionality. It is one of the ideas behind modern support vector machines, but in fact it goes back at least as far as the perceptron.
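To make the expansion step concrete, here is a minimal tile-coding sketch (my illustration, not the answerer's implementation): a 2-D state is mapped to an 800-dimensional sparse binary feature vector via 8 slightly offset 10x10 tilings, all of which are arbitrary choices for the example. A linear method trained on these features can then capture relationships that are nonlinear in the original two state variables.

    import numpy as np

    # Sketch of tile coding: map a low-D state to a high-D sparse binary vector.
    # 8 tilings of 10x10 tiles over the unit square are arbitrary assumptions.

    N_TILINGS, TILES_PER_DIM = 8, 10

    def tile_features(x: float, y: float) -> np.ndarray:
        """Expand a 2-D state in [0,1)^2 into an 800-D binary feature vector."""
        features = np.zeros(N_TILINGS * TILES_PER_DIM ** 2)
        for t in range(N_TILINGS):
            offset = t / (N_TILINGS * TILES_PER_DIM)  # shift each tiling slightly
            col = int((x + offset) * TILES_PER_DIM) % TILES_PER_DIM
            row = int((y + offset) * TILES_PER_DIM) % TILES_PER_DIM
            features[t * TILES_PER_DIM ** 2 + row * TILES_PER_DIM + col] = 1.0
        return features

    phi = tile_features(0.31, 0.77)
    print(phi.shape, int(phi.sum()))  # (800,) 8 -- 2 inputs became 800 features

Each tiling contributes exactly one active feature, so learning with a linear weight vector over phi stays cheap even though the representation is 400 times wider than the original state.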
