Friday, March 25, 2016

The Shallows

Preface

The Net seizes our attention only to scatter it.
“How do users read on the web?” the usability expert Jakob Nielsen once asked. His succinct answer: “They don’t.”

The book, “The Shallows”, raises a problem we all suffer from nowadays: we are losing our ability to read deeply. The author examines this issue from several angles, providing enough knowledge for us to understand why it happens. Although the book doesn’t offer a specific solution, we can start dealing with the problem by raising our own self-awareness.
In this post, I am sharing three clips from the book. Each of them, I think, answers a question in today’s tech world with accurate insight.

AI

After AlphaGo defeated the Korean Go champion Lee Sedol, some people started to worry that robots will take over the world. Some media outlets spread articles with scary titles claiming that robots may become uncontrollable in the near future. I totally disagree with these ideas, which I think stem from a lack of knowledge. AI systems and robots are still machines: they do whatever we ask them to do, and they will never do “something out of control” unless we program them to. It’s natural for people to project emotions onto a machine, but a machine is still a machine. I believe the way to build a real AI with its own emotions and mind is to fully understand the human brain from a biological perspective. Here is a clip from the book that shares the same idea.
The first academic conference dedicated to the pursuit of artificial intelligence was held back in the summer of 1956 - on the Dartmouth campus - and it seemed obvious at the time that computers would soon be able to replicate human thought. The mathematicians and engineers who convened the month-long conclave sensed that, as they wrote in a statement, “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” It was just a matter of writing the right programs, of rendering the conscious processes of the mind into the steps of algorithms. But despite years of subsequent effort, the workings of human intelligence have eluded precise description. In the half century since the Dartmouth conference, computers have advanced at lightning speed, yet they remain, in human terms, as dumb as stumps. Our “thinking” machines still don’t have the slightest idea what they’re thinking. Lewis Mumford’s observation that “no computer can make a new symbol out of its own resources” remains as true today as when he said it in 1967.
But the AI advocates haven’t given up. They’ve just shifted their focus. They’ve largely abandoned the goal of writing software programs that replicate human learning and other explicit features of intelligence. Instead, they’re trying to duplicate, in the circuitry of a computer, the electrical signals that buzz among the brain’s billions of neurons, in the belief that intelligence will then “emerge” from the machine as the mind emerges from the physical brain. If you can get the “overall computation” right, as Page said, then the algorithms of intelligence will write themselves. In a 1996 essay on the legacy of Kubrick’s 2001, the inventor and futurist Ray Kurzweil argued that once we’re able to scan a brain in sufficient detail to “ascertain the architecture of interneuronal connections in different regions,” we’ll be able to “design simulated neural nets that will operate in a similar fashion.” Although “we can’t yet build a brain like HAL’s,” Kurzweil concluded, “we can describe right now how we could do it.”
There’s little reason to believe that this new approach to incubating an intelligent machine will prove any more fruitful than the old one. It, too, is built on reductive assumptions. It takes for granted that the brain operates according to the same formal mathematical rules as a computer does - that, in other words, the brain and the computer speak the same language. But that’s a fallacy born of our desire to explain phenomena we don’t understand in terms we do understand. John von Neumann himself warned against falling victim to this fallacy. “When we talk mathematics,” he wrote toward the end of his life, “we may be discussing a secondary language, built on the primary language truly used by the central nervous system.” Whatever the nervous system’s language may be, “it cannot fail to differ considerably from what we consciously and explicitly consider as mathematics.”

Modern Brain

Although I’ve spent lots of time studying, my ability to memorize things hasn’t improved in years. I started to wonder: do we really need to memorize things if there’s the Internet? The following clip from the book nicely explains why people are losing their memorizing skills.
What determines what we remember and what we forget? The key to memory consolidation is attentiveness. Storing explicit memories and, equally important, forming connections between them requires strong mental concentration, amplified by repetition or by intense intellectual or emotional engagement. The sharper the attention, the sharper the memory. “For a memory to persist,” writes Kandel, “the incoming information must be thoroughly and deeply processed. This is accomplished by attending to the information and associating it meaningfully and systematically with knowledge already well established in memory.” If we’re unable to attend to the information in our working memory, the information lasts only as long as the neurons that hold it maintain their electric charge - a few seconds at best. Then it’s gone, leaving little or no trace in the mind.
Attention may seem ethereal - a “ghost inside the head,” as the developmental psychologist Bruce McCandliss says - but it’s a genuine physical state, and it produces material effects throughout the brain. Recent experiments with mice indicate that the act of paying attention to an idea or an experience sets off a chain reaction that crisscrosses the brain. Conscious attention begins in the frontal lobes of the cerebral cortex, with the imposition of top-down, executive control over the mind’s focus. The establishment of attention leads the neurons of the cortex to send signals to neurons in the midbrain that produce the powerful neurotransmitter dopamine. The axons of these neurons reach all the way into the hippocampus, providing a distribution channel for the neurotransmitter. Once the dopamine is funneled into the synapses of the hippocampus, it jump-starts the consolidation of explicit memory, probably by activating genes that spur the synthesis of new proteins.
The influx of competing messages that we receive whenever we go online not only overloads our working memory; it makes it much harder for our frontal lobes to concentrate our attention on any one thing. The process of memory consolidation can’t even get started. And, thanks once again to the plasticity of our neuronal pathways, the more we use the Web, the more we train our brain to be distracted - to process information very quickly and very efficiently but without sustained attention. This helps explain why many of us find it hard to concentrate even when we’re away from our computers. Our brains become adept at forgetting, inept at remembering. Our growing dependence on the Web’s information stores may in fact be the product of a self-perpetuating, self-amplifying loop. As our use of the Web makes it harder for us to lock information into our biological memory, we’re forced to rely more and more on the Net’s capacious and easily searchable artificial memory, even if it makes us shallower thinkers.

Why CLI (Command Line Interface)

When I code, I prefer using the CLI, the command line interface. The reason is that the CLI gives me a clearer view of the whole system. The author notes that helpful interfaces don’t always benefit their users, a point I’d like to relate to the comparison between CLI and GUI.
In the early stages of solving the puzzle, the group using the helpful software made correct moves more quickly than the other group, as would be expected. But as the test proceeded, the proficiency of the members of the group using the bare-bones software increased more rapidly. In the end, those using the unhelpful program were able to solve the puzzle more quickly and with fewer wrong moves. They also reached fewer impasses - states in which no further moves were possible - than did the people using the helpful software. The findings indicated, as van Nimwegen reported, that those using the unhelpful software were better able to plan ahead and plot strategy, while those using the helpful software tended to rely on simple trial and error. Often, in fact, those with the helpful software were found “to aimlessly click around” as they tried to crack the puzzle.
Eight months after the experiment, van Nimwegen reassembled the groups and had them again work on the colored-balls puzzle as well as a variation on it. He found that the people who had originally used the unhelpful software were able to solve the puzzles nearly twice as fast as those who had used the helpful software. In another test, he had a different set of volunteers use ordinary calendar software to schedule a complicated series of meetings involving overlapping groups of people. Once again, one group used helpful software that provided lots of on-screen cues, and another group used unhelpful software. The results were the same. The subjects using the unhelpful program “solved the problems with fewer superfluous moves and in a more straightforward manner,” and they demonstrated greater “plan-based behavior” and “smarter solution paths.”
In his report on the research, van Nimwegen emphasized that he controlled for variations in the participants’ fundamental cognitive skills. It was the differences in the design of the software that explained the differences in performance and learning. The subjects using the bare-bones software consistently demonstrated “more focus, more direct and economical solutions, better strategies, and better imprinting of knowledge.” The more that people depended on explicit guidance from software programs, the less engaged they were in the task and the less they ended up learning. The findings indicate, van Nimwegen concluded, that as we “externalize” problem solving and other cognitive chores to our computers, we reduce our brain’s ability “to build stable knowledge structures” - schemas, in other words - that can later “be applied in new situations.” A polemicist might put it more pointedly: The brighter the software, the dimmer the user.
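To make this concrete for the CLI-versus-GUI case: a shell gives you no on-screen cues, so you have to plan the whole solution yourself by composing small commands. Here is a minimal sketch of what I mean (the file names and the /tmp directory are my own illustration, not from the book): finding the largest file in a directory, a task a GUI file manager would answer with a sorted column you merely click.

```shell
# Hypothetical demo files for illustration only:
mkdir -p /tmp/cli_demo
head -c 1000 /dev/zero > /tmp/cli_demo/big.log
head -c 10   /dev/zero > /tmp/cli_demo/small.log

# The CLI forces you to plot the strategy yourself:
# list entries sorted by size, then keep only the top one.
largest=$(ls -S /tmp/cli_demo | head -n 1)
echo "$largest"
```

The point is not that the pipeline is hard, but that nothing suggests it to you; like van Nimwegen’s bare-bones software, the shell makes you build and keep a mental model of each step.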
