The neuroscientific case for Art in the age of Netflix
sybaritic: fond of sensuous luxury or pleasure; self-indulgent (derives from the Greek city Sybaris)
the company most devoted to this sybaritic vision, Netflix
It is common for dreams to involve nonsensical objects or borderline categories. People who are two people, places that are both your home and a spaceship. Dreams explore the statespace and in doing so warp and play with the categories, the dimensions of perception itself, stress-testing and refining. The inner fabulist shakes up the categories of the plastic brain. This authoring avoids a phenomenon called overfitting. Overfitting, a statistical concept, occurs when a model becomes too sensitive to the data it's been fed and therefore stops being generalizable. It's learning something too well. For instance, artificial neural networks have a training data set: the data that they learn from. All training sets are finite, and often the data comes from the same source and is highly correlated in some non-obvious way. Because of this, artificial neural networks are in constant danger of becoming overfitted. When a network becomes overfitted, it will be good at dealing with the training data set but will fail at other similar data sets. All learning is basically a tradeoff between specificity and generality in this manner.
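(my aside, not from the essay: the overfitting described here is easy to see in a toy sketch. The numpy example below, with made-up data and no relation to brains, fits a small noisy training set with a low-degree and a high-degree polynomial; the high-degree one memorizes the training points but does badly on held-out data from the same source.)

```python
# Hypothetical toy illustration of overfitting (not from the essay):
# a high-degree polynomial memorizes a small noisy training set but
# fails on held-out points drawn from the same underlying curve.
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    # Noisy observations of a simple underlying function.
    x = rng.uniform(-1, 1, n)
    return x, np.sin(3 * x) + rng.normal(scale=0.2, size=n)

x_train, y_train = sample(10)    # small, correlated "training set of life"
x_test, y_test = sample(200)     # data the model never saw

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
# The degree-9 fit passes through all 10 training points (train MSE ~ 0)
# while its held-out MSE is much larger: it learned the data "too well."
```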
The most common way to get around the universal problem of overfitting is to expand the training set. But for real brains, the learning process that produces our experiential landscape relies on the training set of life. That set is limited in many ways, highly correlated in many ways. Life alone is not a sufficient training set for the brain. Dreams prevent our brains from overfitting our experiential statespace, blurring categories and taking unlikely trajectories. The fight against overfitting every night creates a cyclical process of annealing: during wake the brain fits to its environment via learning; then during sleep the brain “heats up” through dreams that prevent it from clinging to suboptimal solutions and models.
ooh i like this
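(another aside: a loose, purely illustrative stand-in for the wake/sleep annealing idea, and an assumption on my part rather than the essay's mechanism. The nearest simple ML analogue is noise injection: since the tiny training set can't be expanded, we "dream" on it instead, refitting the overfit-prone model on many warped copies of the same points, which smooths the fit and improves held-out error.)

```python
# Hypothetical toy analogy for dreams-as-noise-injection (not the essay's model):
# refit the overfit-prone degree-9 polynomial on many jittered "dreamed"
# replays of the same 10 points instead of on the exact originals.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, 10)
y_train = np.sin(3 * x_train) + rng.normal(scale=0.2, size=10)
x_test = rng.uniform(-1, 1, 200)
y_test = np.sin(3 * x_test) + rng.normal(scale=0.2, size=200)

# "Wake": the straightforward fit, which clings to the 10 exact experiences.
awake_fit = np.polyfit(x_train, y_train, 9)

# "Sleep": many warped replays of those experiences, never the originals.
x_dream = np.concatenate([x_train + rng.normal(scale=0.15, size=10) for _ in range(200)])
y_dream = np.tile(y_train, 200)
dream_fit = np.polyfit(x_dream, y_dream, 9)

for name, coeffs in (("awake only", awake_fit), ("with dreams", dream_fit)):
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"{name}: held-out MSE {mse:.3f}")
# Input noise acts like regularization: the dreamed fit stays smoother and
# generalizes better than the one that memorized its training set exactly.
```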