Special Topics In Fractal Dimensionality

Hours (if not days) of homebound fractal fun can now be had with this Java-based application created by Steve Crampton. Just plug in a thresholded image file and watch this li’l piece o’ work go to town:



A fractal dimension of 1.0 indicates that there are apparently no patterns that repeat at different scales. A fractal dimension greater than 1.0 (but less than 2.0) indicates some degree of self-similarity. A dimension of 2.0 indicates that the object is a 2-dimensional object, for example, a plane.

The program finds what is called the “box-counting dimension” using a Monte Carlo algorithm. As each level is processed, the boxes are displayed.
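If you want to play along at home without the applet, here is a minimal sketch of the box-counting idea in Python (assuming numpy is installed). It does a plain grid count at several box sizes rather than the Monte Carlo sampling Crampton's applet uses, and all the names in it are mine, not his:

import numpy as np

def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the box-counting dimension of a 2-D boolean (thresholded) array."""
    counts = []
    for size in box_sizes:
        occupied = 0
        for i in range(0, binary_image.shape[0], size):
            for j in range(0, binary_image.shape[1], size):
                # A box counts if any part of the figure falls inside it.
                if binary_image[i:i + size, j:j + size].any():
                    occupied += 1
        counts.append(occupied)
    # The dimension is the slope of log(count) against log(1 / box size).
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Sanity check: a completely filled image should come out at 2.0,
# and a single one-pixel-wide line at 1.0.
print(round(box_counting_dimension(np.ones((256, 256), dtype=bool)), 2))

Feed it a thresholded photo and you should land somewhere between 1.0 and 2.0, which is the range all the numbers below live in.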



And speaking of box-counting, you may be surprised to know that this (supposedly) 3-D porn trading card actually has a fractal dimension of…



image



image



1.52!



So let’s try some more:



John McCain



image



image



Fractal dimension: 1.66



A Wall of Dildos



image



image



Fractal dimension: 1.56



David Lee Roth



image



image



Fractal dimension: 1.51



1985 Topps (#73) Autographed Rusty Kuntz baseball card



image



image



Fractal dimension: 1.71





New Jersey



image



image



Fractal dimension: 1.59



Pegging



image



image



Fractal dimension: 1.15



"True Blue" by Andy Thomas



image



image



Fractal dimension: 1.71



The Manual by The Timelords



image



image



Fractal dimension: 1.62





**As a smart ass, I feel compelled to tell you that I know this is not how fractal dimensionality (Hausdorff-Besicovitch dimensionality, if you’re nasty) actually works. For more on the fascinating subject of topological dimensions, check out this introduction.


Two Out Of Three Ain’t Shit: Advanced Handling Of Large Data Sets

image



In the hypothesis-driven field of scientific research, Yakir Reshef is quietly shaping a new perspective on the assessment of large-scale data sets. “Our tool is a hypothesis generator,” says Reshef, the co-leader of a team of researchers from the Broad Institute and Harvard University who recently published a paper outlining a statistical method for revealing relationships among variables in complex data sets.



“The standard paradigm is hypothesis-driven science, where you come up with a hypothesis based on your personal observations. But by exploring the data, you get ideas for hypotheses that would never have occurred to you otherwise.”



He’s referring to the ability of his team’s method, part of a set of statistical tools known as MINE (Maximal Information-based Nonparametric Exploration), to reveal unexpected connections between the bits of information represented in a very large data set. The team’s contribution is the maximal information coefficient (MIC), which allows statisticians to perform the equivalent of a line-dragging expedition through a large set of data, turning up unforeseen and possibly novel connections between anything from athletes’ performance and their salaries to female obesity and social status in Manila.



Traditionally, statisticians looking to support or deny the existence of a connection between variables will use a single method of data filtering that corresponds to the connection sought by their hypothesis. A researcher concerned with the question of whether a correlation exists between exam week and poor game performance by college quarterbacks will assemble a data set that includes performance stats for a number of college quarterbacks during a given season. They will then look for discrepancies between each quarterback’s average performance and his performance during games that occurred at the end of exam week. Whether they uncover evidence of a pattern, or find no such evidence, they report their findings. While this research provides a satisfying answer to the original question (assuming the research methods are sound), it overlooks a wealth of other connections - consonance or coincidence - such as whether the frequency of sacks in the first half has anything to do with the number of passing yards in the second half, or whether rate of completion in the first quarter seems to be related to performance in last week’s game. Subsequent research may examine these hypotheses, but then again, it may not, if no one thinks to ask these specific questions. The drive behind discovery, in other words, is the creative and curious mind of the scientist.



While there is no shortage of creative and curious scientists in our era of unprecedented technological development, the ratio of finite minds to infinite problems remains a handicap. Computer-assisted research has come a long way since the days of Edward Lorenz’s stalwart but slow Royal McBee, and while this increased processing capability has given an immeasurable boost to our ability to analyze large amounts of data, the efficacy of computer research is ultimately limited by the kinds of questions scientists ask. Computers can sift through the data, but they can’t formulate the hypothesis. Ingenuity, an eye for nuance and suggestion, and a willingness to hypothesize against prevailing research are all research tools equal (or perhaps greater) in importance to processing power or size of a data set.



For a researcher, choosing a scale on which to represent the data in a large set is another important step, as the choice of a unit size necessarily impacts the expression of findings. To determine the most appropriate unit size, a researcher must decide which scale would best represent the specific relationships he seeks to examine within the data. But in the case of large data sets with many relationships, a researcher may not be aware of what he is looking for, only that he wants to know what kind and quantity of relationships between variables exist. Choosing a scale is then problematic, since no relationship has yet been identified on which to base the scale. Representing the multiple relationships contained in a large data set also makes it difficult to choose a scale, because different types of correlations may be best expressed on different scales.



Compounding this dilemma are the different statistical tests used to determine the presence of different kinds of patterns. Choosing a particular statistical method by which to “read” the data narrows the range of patterns that can be found within it, as the method will necessarily discover only the types of correlations it is configured to find, while overlooking others - a failure of what the MIC authors call equitability. The MIC development team sought to provide a method that identified correlations between variables without giving preference to certain kinds of patterns, identifying any kind of clear structure present within a data set and teasing out all varieties of relationships between its multiple variables.
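To make that a little more concrete, here is a toy sketch of the intuition behind MIC in Python (numpy assumed): grid the scatterplot of two variables at several resolutions, compute the mutual information of each gridding, normalize, and keep the best score. The real MINE algorithm searches over grids far more cleverly than this brute-force, equal-width-bin stand-in, and every name below is my own illustration, not the team's code:

import numpy as np

def grid_mutual_information(x, y, x_bins, y_bins):
    """Mutual information (in bits) of x and y under an equal-width grid."""
    joint, _, _ = np.histogram2d(x, y, bins=[x_bins, y_bins])
    joint = joint / joint.sum()                      # joint probabilities
    px = joint.sum(axis=1, keepdims=True)            # marginal of x (column vector)
    py = joint.sum(axis=0, keepdims=True)            # marginal of y (row vector)
    nonzero = joint > 0
    return float((joint[nonzero] *
                  np.log2(joint[nonzero] / (px @ py)[nonzero])).sum())

def toy_mic(x, y, max_bins=8):
    """Maximum normalized grid mutual information over small grid shapes."""
    best = 0.0
    for bx in range(2, max_bins + 1):
        for by in range(2, max_bins + 1):
            mi = grid_mutual_information(x, y, bx, by)
            best = max(best, mi / np.log2(min(bx, by)))  # MIC-style normalization
    return best

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
print(round(toy_mic(x, x ** 2), 2))                   # clear but non-linear structure: scores well above the noise case
print(round(toy_mic(x, rng.uniform(-1, 1, 500)), 2))  # independent noise: scores near zero

The point of the maximum over grid shapes is the same as in the real method: you don't have to decide in advance what kind of relationship you're looking for, or at what scale.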



Because MINE methods discover many different kinds of correlations, one might expect some measure of difficulty in representing the variety of complex relationships brought to light by such a method, given the earlier discussion of scale and its dependence on the nature of a relationship. The task of representing multiple relationships creates a dilemma if one uses the scale-dependent technique of absolute measure. However, the MIC approach solves this problem by analyzing the data set in terms of correlations between variables rather than absolute measurements, an essential asset when the exact nature of the information sought is not known.



Immense sets of data exist everywhere and measure nearly every conceivable aspect of life on earth, from the movement of Antarctic glaciers to the banking habits of Guyanese expatriates to the number of ice cream sundaes in souvenir plastic caps sold during the 7th inning stretch at Yankee Stadium. Vast data stores offer an inconceivable richness of possible insights into the complex behaviors of systems, but their exploitation tends toward the more lucrative species of insight. Insurance companies and online retailers, among others, use the information contained in these large data sets to maximize their profit margins. Given the volume of non-salient information necessarily contained in such a set, this use may seem to call for an approach that starts with specific questions and mines the set for answers, just as traditional market research operates. While the researchers undertaking such operations may be well-trained in the science of extracting meaningful information from their data, and sensitive to its implicit properties, they may nevertheless miss some novel correlation simply because they weren’t looking for it. This is where the MIC method would be helpful: by generating a list of all properties that share a common pattern of movement, it enables researchers to tick through the “unimportant” connections and focus on the important ones they may have otherwise missed.



Depending on whether history sees fit to incorporate two paradigm-goosing analytical projects into one unprecedented election strategy in 2012, this year could be an interesting one for statisticians and incumbents alike. Using an approach with the frankly terrifying appellation of “microlistening”, Barack Obama’s re-election team hopes to crunch a mother lode of data consisting of personal narratives, arguments, accolades and criticism sent via text message in order to come up with an accurate read on the attitudes of Obama supporters and detractors. They plan to use the resulting checklist to formulate Obama’s re-election strategy. In years gone by, such work would have been performed by a battery of interns armed with felt-tip pens and check boxes marked “Approve”, “Disapprove” and “Neutral”. And unless the volume of correspondence was small, there was no way every letter would be read. Given the staggering load of opinions the Obama team has received, such an approach could only handle a tiny fraction of these missives. The method: Ascertain what people are really saying by identifying the statistical patterns contained in a sampling of data, then extend those findings across the entire set, analyzing word choice, context and grammatical construction to guess the specific concerns behind these anonymous submissions. The Obama people may have a pretty good idea what concerns voters, and chances are they could draw up a pretty good list just by trolling the cable news cycle for a couple days, but the campaign needs more than that. Campaign strategy, even at the local level, is approaching a standard of near-surgical precision with regard to the management of budgeting, advertising and speechwriting, and there’s evidence that this micro-structuring can change outcomes. Will it work for Obama? Who the fuck knows. The more interesting question: Is there a correlation between the quantity of lumberjack shirts and the number of Brinsley Schwarz records owned by a Canvey Island punter in 1974, give or take a preference for Dr. Feelgood?

Image: Still from Alessandro Capozzo’s “Relations”, 2004


klbkultur:

Walter Tandy Murch : The Calculator. 1949
/via paintingperceptions



When Mutants Fight, We All Win

(The first in a series of recreational Science News facsimiles.)

The careful replication of genetic material that gives offspring their parents’ characteristics may seem like the antithesis of the rollicking, snarling free-for-all that many of us picture when we hear the phrase “natural selection”. But actually, this painstaking process (called conservation) is at the heart of the wildly competitive contest that shapes the course of evolution. Evolution is the result of genetic mutations that perpetuate throughout a species, eventually becoming the genetic norm. These mutations, in order to dominate their macromolecular cousins, must present some advantage which allows their possessors to survive longer and procreate more frequently than those with the original genetic trait, thus increasing the mutants’ ability to pass on the mutated gene.



If increased procreation comes at a competitor’s expense, then the mutated gene enjoys not only greater dispersion but also fewer copies of the original gene to compete against. But of course, there’s a catch. The original gene that the mutant gene is seeking to displace is itself a mutant, having emerged from the countless reconfigurations of species DNA going back to the first single-celled organism that contributed its genetic payload. It’s also had to fight for its genetic foothold, and it couldn’t have earned a place at the table unless it provided a distinct advantage to the animals who possessed it. These hereditary stalwarts may be old, but they’re still the same wily, scheming mutants (in a manner of speaking) at heart (also in a manner of speaking).

So, the “new” mutants are going up against strong, time-tested mutants who actually have the advantage, in that they’ve exerted an influence over subsequent genetic mutations, thus actually configuring the genetic environment (and by extension the whole organism) to suit and protect them. So the battle is squarely on their turf. This means the genetic upstarts must be extra-helpful to the species in order to displace these fortified stretches of DNA. If the original copies capitulated without much of a fight, they could be easily replaced by mutations that provided only a tiny survival advantage, thus making the evolutionary process slow - and messy, since so many mutations would have a chance to disperse themselves throughout the population. But when strong copies battle versatile mutants, the species as a whole wins. Conservation creates choice conditions for competition, creating genetic diversity out of genetic orthodoxy.


Everything I Needed To Know About Science, I Learned From The Side Of A Van, Part 4: Fluid Dynamics, Fractals and Levy Flights



Alright, so IT AIN’T A VAN. But it is representative of the volatile artistic climate of van customization! Just as Mondrian and Kandinsky rejected traditional representations in favor of striking geometry, many custom van artists of the 1970s adopted an aesthetic of clean, geometric lines as a reaction against the scenic and symbolic decorations festooning the ubiquitous VW buses and campers of the 1960s. And just as the calculated compositions of the Constructivists were superseded by the wild, performance-based art of the abstract expressionists, so the clean lines adorning the hulking mobile entertainment centers of the 1970s gave way to the dynamic style of splatter painting in the 1980s. There are few surviving examples of splatter-painted vehicles due to the limited collector’s interest in these pieces, and also to the fact that splatter painting, as an “experimental” treatment, had the tendency to devalue a vehicle.



Unfortunately, not much is known about the individual Splatter Painters who comprised this movement. There may have been dozens of passionate Pollocks, de Koonings and Klines languishing in latex-smeared obscurity in the overheated womb of the body shop, but rather than exhibit their work in provocative fashion - staging confrontational events to demonstrate their contempt for classic van art - they preferred to work in relative anonymity. A few Splattered vehicles might be spotted at a van or truck show, but on the whole, the artists and their clients preferred (in true action painter style) to recognize art as a process and not an endpoint. The owners of Splattered vehicles preferred to use these works of art to extricate stumps from the ground, haul trailers and transport large mysterious bundles covered with tarps, rather than park them in the driveway for neighbors to admire.



So what does Splatter painting have to do with science? Fucking plenty! From the fluid dynamics equations that map out the path of paint to panel, to the Gaussian random motion of the artist’s hand at work, to the potentially unique fractal patterns that characterize each artist’s technique, scientific thinking pervades all aspects of Splatter art. Cantor dusts and Sierpinski carpets, mathematical terms themselves redolent of the ’70s customization craze, are also proposed characteristics of the Splatter artist’s trade.



Earlier this year, a team of art historians and mathematicians published what’s believed to be the first quantitative analysis of Pollock’s drip painting method. From Wired Science:



The team focused on the painting Untitled 1948-49, which features wiggling lines and curlicues of red paint. Those loops formed through a fluid instability called coiling, in which thick fluids fold onto themselves like coils of rope.



“People thought perhaps Pollock created this effect by wiggling his hand in a sinusoidal way, but he didn’t,” Herczynski said.



Coiling is familiar to anyone who’s ever squeezed honey on toast, but it’s only recently grabbed the attention of physicists. Recent studies have shown that the patterns fluids form as they fall depends on their viscosity and their speed. Viscous liquids fall in straight lines when moving quickly, but form loops, squiggles and figure eights when poured slowly, as seen in this video of honey falling on a conveyor belt.



The first physics papers that touched on this phenomenon appeared in the late 1950s, but Pollock knew all about it in 1948. Pollock was famous for searching out and using different kinds of paints than anyone else in the art world, and mixing his paints with solvents to make them thicker or thinner. Instead of using a brush or pouring paint directly from a can, he lifted paint with a rod and let it dribble onto the canvas in continuous streams. By moving his arm at different speeds and using paints of different thicknesses, he could control how much coiling showed up in the final painting.





The dots, spots, coils, streaks and splatters of Pollock’s canvases tell the story of an interaction between the artist and the forces of nature operating within the gap between hand and canvas. To apply paint to his canvases, Pollock laid them on the ground and dripped paint onto them using a stick or trowel, producing different effects by adjusting the height and the angle of application. The paint, when dropped from above, becomes a free fluid jet that can exhibit a few different kinds of instability. It can break into drops in the air, splash when contacting a surface, or it can fold and coil in much the same way a squirt of oil paint from the tube coils onto an artist’s palette. The recognizable dashes, splashes, trails and coils of Pollock’s paintings are the evidence of these principles at work.



In 1999, a physicist named Richard Taylor published a paper claiming that patterns within the characteristic splatters of a Pollock painting did more than reveal his technique: they could be used to positively identify the artist. Taylor reported finding fractal patterns in Pollock’s drip paintings that were consistent from one work of art to the next, thus constituting a veritable artistic “signature” hidden deep within his paintings. After learning that Taylor’s criteria were actually being used to authenticate a number of recently discovered Pollocks, an alarmed team of physicists, led by Katherine Jones-Smith and Harsh Mathur, quickly published a criticism of his research. They argued that no fractal characteristics actually appear in Pollock’s paintings, because the regions Taylor cited were too small to be usefully considered as fractals. As further proof, the physicists included a simple digital drawing made by a member of the team that met all the “fractal” criteria that supposedly identify a Pollock painting as an original.



Nevertheless, Taylor continued to insist that Pollock’s tiny patterns do indeed constitute fractals. His persistence paid off when his argument received perhaps the king of all credible endorsements: the blessing of the founding father of fractals, Benoit Mandelbrot.



It has also been claimed that Pollock’s paintings contain evidence of random motions known as Levy flights, patterns of movement often seen in turbulent fluids. Levy flights are made up of many short, random movements interspersed with occasional long trajectories. Levy flight motion also characterizes the migration patterns of animals foraging for food, which generally consist of short trips until a food source is exhausted, followed by a long trip in search of another food source.





Levy flights are known to generate fractals, which would support Taylor’s findings if Pollock’s works do in fact contain the signature paths of Levy flights. Jones-Smith and Mathur argue that the visible trajectories in a Pollock are actually more aligned with a pattern called the Gaussian random walk, which displays a normal distribution of movement.
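For the curious, here is a minimal sketch in Python (numpy assumed) of the two kinds of motion being argued over - a Gaussian random walk with normally distributed step lengths versus a Levy flight whose step lengths come from a heavy-tailed (here, Pareto) distribution. The parameters are illustrative guesses, not values from Taylor's or Jones-Smith's analyses:

import numpy as np

rng = np.random.default_rng(42)
n_steps = 10_000

def walk(step_lengths):
    """Turn a sequence of step lengths into a 2-D path with random headings."""
    angles = rng.uniform(0, 2 * np.pi, len(step_lengths))
    steps = np.column_stack([step_lengths * np.cos(angles),
                             step_lengths * np.sin(angles)])
    return np.cumsum(steps, axis=0)

gaussian_steps = np.abs(rng.normal(0.0, 1.0, n_steps))   # mostly similar, modest steps
levy_steps = rng.pareto(1.5, n_steps) + 1.0              # mostly short steps, occasionally huge jumps

gaussian_path = walk(gaussian_steps)
levy_path = walk(levy_steps)

# The heavy tail is the whole story: compare the single longest step in each walk.
print("longest Gaussian step:", round(float(gaussian_steps.max()), 2))
print("longest Levy step:", round(float(levy_steps.max()), 2))

Plot the two paths and the difference is obvious: the Gaussian path wanders around in a tight cloud, while the Levy path is punctuated by long, sudden leaps.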





THERE. I TOLD YOU IT WAS SCIENCE


Everything I Needed To Know About Science, I Learned From The Side Of A Van, Part 3: Chess

  • The number of possible, unique chess games is greater than the number of electrons in the universe.
  • The Isle of Lewis chess pieces are the oldest surviving complete chess set known, dating back to the 12th century. Discovered in the Outer Hebrides of Scotland and crafted from walrus ivory, they are thought to have originated from Norway.
  • The word “checkmate” comes from the Persian shah mat, which means “the king is defeated”.
  • The rook’s name comes from the Arabic word rukh, meaning chariot. During the Middle Ages, when chariots were no longer in use, the rook was gradually modified to resemble the turret of a castle.
  • The shortest recorded stalemate took place in just 10 moves.
  • The longest chess game on record took 269 moves to produce a draw.
  • The longest game of chess theoretically possible is 5,949 moves.
  • The shortest chess game on record produced a checkmate in only 2 moves.
  • There are 318,979,564,000 possible ways of playing the first four moves for both sides in a game.
  • There are approximately 169,518,829,100,544,000,000,000 ways to play the first 10 moves in a game of chess.
  • IBM’s Deep Thought became the first computer to beat a chess grandmaster when it defeated Bent Larsen in 1988; Garry Kasparov went on to beat Deep Thought the following year.

What does it mean to be a good chess player? Well, I guess you could say good players win and bad players don’t, which is perfectly fair, and it serves me right for asking such a dick stupid question, but what happens when experts match wits with experts? What forces are at play that ultimately decide who wins and loses? When a human beats a computer, by what virtues of his game does he do so? And when a computer loses to a human, what deficiencies in its play caused its downfall? Research suggests that the gap between expert and novice players - and human wetware and computer software - exists at a deeper level of play than previously thought. Expert (human) chess players use a failure-based model to scientifically evaluate their available moves, rejecting one only when it proves itself to be untenable. By contrast novice players (and computers) tend to think themselves logically through the game, evaluating possible moves based on rough (human) or precise (computer) statistical calculations of the probable success of a move in the context of preceding play and projected future play.

In a 2006 study conducted by Cowley and Byrne of Trinity College, Dublin, the two researchers analyzed the playing styles of novice and expert players. They found that novice players tended to establish their strategy early in the game, sticking to it even when it started to work against them. Expert players, by contrast, consistently re-evaluated their approach every time the board changed, quickly rejecting any untenable strategy they had been utilizing. Cowley and Byrne propose that these results are evidence that domain expertise may facilitate falsification, which is a crisp, scientific way of saying that expert players generate move sequences that falsify their plans more readily than novice players, who tend to favor move sequences that confirm their plans.
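Here is a toy sketch in Python of the distinction Cowley and Byrne are drawing - not their experimental protocol, and not real chess, just an abstract bag of candidate plans where each possible reply is labeled as either still favorable or refuting. The confirming player keeps any plan with at least one happy continuation; the falsifying player throws out any plan with a single refutation. Every name and position here is hypothetical:

def confirming_player(plans):
    """Keep every plan that has at least one favourable continuation."""
    return [name for name, replies in plans.items() if any(replies.values())]

def falsifying_player(plans):
    """Keep only plans with no refuting continuation at all."""
    return [name for name, replies in plans.items() if all(replies.values())]

# True = the continuation still favours us, False = it refutes the plan.
candidate_plans = {
    "kingside attack": {"...h6": True, "...d5 counter-thrust": False},
    "queenside squeeze": {"...a5": True, "...Nc4": True},
    "early queen sortie": {"...Nc6": False, "...g6": False},
}

print(confirming_player(candidate_plans))   # ['kingside attack', 'queenside squeeze']
print(falsifying_player(candidate_plans))   # ['queenside squeeze']

The confirming player walks away still believing in the kingside attack, refutation and all; the falsifying player keeps only the plan that survives every reply, which is the whole point of the study.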



The behavior of the novice player corresponds to what is called confirmation bias, our tendency to search for confirming evidence that reinforces our beliefs, and avoid information that contradicts what we think we know. Confirmation bias contributes to irrational decision-making and tends to reinforce prejudices, yet it appears to be the most common method of evaluation even for hypotheses amenable to easy scientific testing (say, the taste appeal of an unfamiliar food someone offers you). So why, given the fallibility of confirmation testing as a way of vetting a proposition, do we prefer it to the much more reliable method of falsification? It would seem at first glance that confirmation is easier than falsification because it’s less rigorous and requires only a perfunctory rifling through one’s memory banks rather than the formal process of scientific vetting. But actually, for confirmation to be appropriately strong, one would have to dredge up dozens, if not hundreds - or more, who really knows - of experiences that confirm, without doubt, the conjecture at hand.



Corroboration relies on a vast bank of knowledge, consisting mostly of experience, from which we select the events and outcomes that we believe are applicable to our hypothesis. However, we’re much more likely to choose events and outcomes that support our hypothesis rather than challenge it. Why? One possibility is that we find the task of filtering our memories much easier than the task of using deductive logical strategies to come up with statements that correctly reflect the states of things in the world, and so we deliberately choose friendly occurrences in order to eliminate the need for additional hypotheses to explain the events that disprove our “rule”. Another possibility is that we may trust experience over logic, or we know (and this is true) that we are better at sorting through our experiences than thinking rationally. Therefore, we may place more trust in our ability to verify claims by appealing to past experience than in our ability to formulate strong, valid rational arguments to support a claim - and rightly so. Corroboration puts us into contact with our past and our humanity and reinforces the link between what we’ve experienced and what we believe. Elimination is more difficult, more alien, less engaging, and regularly forces us to admit that a piece of information we’ve inculcated is in fact incorrect, leading to uncomfortable cognitive dissonance. So it’s easy to see why we might prefer the familiar conditions of corroboration to the antiseptic conditions of logic. But this does not mean we have a license to ignore reasoning in favor of empirical verification.



While novice players seem to rely on a single hypothesis to formulate their strategy, expert players rely on a mental “file” of hundreds (perhaps thousands) of moves and strategies to inform their play. When a game changes course, the expert player re-shapes his attack, readjusts his play and second-guesses his assumptions about the course of the game. Instead of relying on a single “good” strategy, the expert advances by sifting through a plenum of known strategies and filtering the good from the bad according to the development of the game in play. Good players, in other words, do not win games by optimistically projecting themselves ahead in the game in order to identify winning strategies, but by scientifically considering and rejecting bad strategies. Their careful hypothesis testing brings to mind the edict that good science does not attempt to corroborate theories. It attempts to destroy them, to falsify the prime theses.



Like corroboration, falsification also relies on vast banks of knowledge. However, this strategy demands that instead of selecting facts and outcomes that support a given conjecture, we instead seek out instances that violate it. The advantage of falsification is that, as Popper points out, it eliminates false premises, thereby disallowing further conjectures that inevitably lead to false conclusions.



While falsification is indeed better at weeding out incorrect assumptions, it relies on the assumption that all information is (theoretically) falsifiable. It is true that most information is of the sort that is falsifiable (say, statements regarding historical facts or the principles of fixed systems), but this is not the case for all information. Some statements can be proven neither true nor false within a given set of parameters, such as propositions undecidable within a certain formal system, of the sort identified by Kurt Gödel and Alan Turing (the former’s “Incompleteness Theorems” being the landmark result). There’s also a whole class of scientific hypotheses that can be neither confirmed nor denied because they can’t be tested, and a host of metaphysical conjectures for which there is no real information to accept or reject. Furthermore, falsification itself provides no strategy for “moving forward” - that is, producing a statement of intent based on one’s conclusions about a certain situation. It contains no instructions about how to make the all-important leap from choosing to acting. It eliminates bad guesses, but cannot certify good ones. Because of its inability to select a course or narrow viable options down to a manageable set, falsification is inappropriate as a general strategy. If falsification were able, as a principle, to limit our choices of actions to a workable few, then it would be advantageous. But, used as a general principle for determining a rational course of action, it is capable of eliminating only a few possibilities at a time, and at such mental cost that it hardly seems efficient. If there are literally thousands of possibilities open to us in a given situation, falsification seems radically inefficient.





Strong chess players behave like good scientists, falsifying their own hypotheses in order to strengthen their position. Karl Popper proposes that falsification is better than confirmation for strengthening a hypothesis, because no matter how much evidence you collect in support of a theory, there’s always the possibility that it might be refuted by some additional information you haven’t yet encountered. Falsification turns uncertainty into verification by seeking out facts that explicitly disprove a hypothesis. A conjecture supported only by confirmation is never absolutely certain - even when supported by a bounty of evidence - but a hypothesis canceled out by a single piece of falsifying information is unquestionably invalid. Because corroboration can lead to the acceptance of untrue ideas, it’s a less reliable strategy for building arguments to support the beliefs that guide our responses to the typical demands of the world (and help to illuminate its more obscure corners). Corroboration is irrational and tends to confirm an individual’s own biases, lending credibility to ideas that may be based on bad information or questionable interpretations. But it’s the strategy we overwhelmingly use to make sense of the world.



While falsification is not an appropriate strategy for largely unregulated decision-making with high numbers of possibilities, it is appropriate in highly regulated situations where strategies can only be formulated according to a fixed and narrow body of possibilities - which is where chess comes in. Eliminating bad guesses is only a helpful strategy when it is able to generate a range of “good” guesses small enough for the human brain to process one-by-one. Choosing the “best” from among these “good” guesses then becomes a scientific process where each in turn is tested against a fixed body of consistent information (the rules of the game) and a flexible and expandable mental “file” of mostly consistent information (game training and previous games). This file contains facts which, while still open to interpretation, are easily sortable according to context and applicability. Thus, using the very specialized information contained in these two caches, a good player can make appropriate use of the method of falsification to select workable strategies that strengthen his position in the game. The novice player, while he may grasp the rules of the game, nevertheless lacks the specialized reserve of moves and strategies that the experienced player has, and therefore falsification is not an appropriate method of play for him.



So it seems that limitation is, paradoxically, the key to chess proficiency. But this kind of limitation refers to limits in the mathematical sense; limits that act as boundaries which differentiate chess knowledge from other kinds of knowledge, and not the “limited” knowledge of a chess beginner. The kind of “limit” I’m referring to is categorical, and its function is to distinguish chess-specific knowledge from generalized knowledge. In other words, the stronger the distinction, the better the player. Rational thinking must go hand-in-hand with “chess thinking” in order to produce winning strategies. Computer chess foundered for a number of years because the machines relied only on algorithms to determine their moves, and were easily outwitted by players with the ability to creatively employ strategies they learned in play or by studying the strategies of others. When computers were given the ability to cross-check their opponents’ moves with a database containing thousands of actual games played by chess masters, their success rates improved dramatically. Employing simple deductive logic led to losses, but using deduction to select from among a fixed reserve of possible moves produced, if not victories, at least much better performances. Because the novice player’s repertoire is limited, he is forced to fill in the gaps in his knowledge using a mix of logical deduction, psychological probing and educated guessing, all of which are likely to produce different conclusions between which he has little ability to distinguish the best.



The upshot? Falsification as a general strategy for confirming knowledge is inappropriate because it cannot produce a sufficiently narrow range of possible actions within a timely manner. Corroboration, despite its tendency to produce only propositions that confirm our irrational hypotheses, is oftentimes the only strategy available to us. However, I propose that we can make the most of our tendency to corroborate and reduce error by sorting our experiences carefully into narrow and well-defined categories, thus replicating the conditions of “expertise” that allow experienced individuals to operate successfully within a certain domain. As a strategy for determining best actions in the highly structured context of such pursuits as chess, calculus, politics or military engagement, falsification - paired with expertise - is a particularly potent instrument for picking out strategies that lead to success.




This Is Science… I think


Everything I Needed To Know About Science, I Learned From The Side Of A Van, Part 2: Hippies





What is a hippie? Where the fuck did they come from? I mean, right? First there was Glenn Miller, and then The Beatles, and then all of a sudden half a million screaming idiots are fucking in mud puddles on a farm while somebody named “Janis Joplin” who looks like she crawled out from beneath a laundry pile at the Ziegfeld Follies and sings like a belt sander is warbling something about a “color TV”. And high up in a building on Madison Avenue, someone is already trying to figure out how to sell them socks and Coca-Cola. I bet if you voted for Nixon in ‘68, you were still pretty fucking confused as you watched news footage of that year’s Democratic National Convention with hundreds of dirt-coiffed teenage Jesuses stumbling through clouds of tear gas in Lincoln Park clutching their hirsute girlfriends (and if you looked close enough, you might just make out the seated figure of Norman Mailer through the window of a hotel bar.) From whence comes this tribe of wayward babes? Why are they making victory signs? And what the hell does “abandon the creeping meatball” mean? Well, huh. Let’s see. I’m going to try and isolate the historical and socioeconomic factors that precipitated hippie culture to the best of my ability, because I firmly believe all superficial youth movements can be fully explained in terms of commercial, political and criminal trends. So let’s get started!



Perhaps the aspect of hippie culture that most alarmed the silent majority, far more worrisome than their association with the Black Panthers or Allen Ginsberg, was their preoccupation with beatnik drug culture and their affinity for an experimental drug called lysergic acid diethylamide. 1965 had seen the re-publication, in cheap paperback editions, of Kerouac’s On The Road and Huxley’s The Doors Of Perception, both of which identified drugs as a key to self-exploration, while other drug literature such as de Quincey’s Confessions Of An English Opium Eater, Baudelaire’s Artificial Paradises and Crowley’s Diary Of A Drug Fiend remained in print or could be found at secondhand bookshops.



Used bookstores allowed unemployed youths to obtain these literary classics for a pittance. Shops selling used goods proliferated during the Great Depression and increased in popularity during the period of wartime austerity that followed. Thus, when the Coca-Cola generation came around, many of these shops were still in business, and only too willing to cater to the younger generation’s taste for lurid drug memoirs, surrealist fiction, French and German philosophy, political treatises, science-fiction and fantasy. Surrealist tomes like Bataille’s Story Of The Eye, Breton’s Magnetic Fields and Artaud’s collected works explored worlds of bizarre, inexplicable visions and dream logic that bore a distinct resemblance to the hallucinogenic experience.



In 1963, the patent on LSD held by Sandoz Laboratories expired, freeing up the drug’s production. The first private individual to manufacture LSD was Owsley Stanley, the Grateful Dead’s sound engineer and a former ballet dancer, who synthesized mass quantities of the drug at his home in Berkeley. His acid fueled Ken Kesey’s mobile “Acid Tests” which, along with parties at Timothy Leary’s upstate New York home, helped introduce young America to LSD.



The sudden increase in the popularity of marijuana coincided not only with the re-publication of classic Beat Generation literature such as On The Road and Naked Lunch, which portrayed drug use as a cornerstone of the bohemian lifestyle, but also with the rise of Mexican drug cartels in the early 1960s. By industrializing the sale and production of marijuana, the competing cartels made it easier than ever to obtain grass in America.



The Spanish-speaking world also had another intoxicating import to offer young Americans in the form of Ernesto “Che” Guevara, the martyred leader of the popular uprising in Cuba. Guevara’s travels through Latin America provided him with moral and intellectual enlightenment as he encountered the abject poverty brought on by an indifferent capitalist regime, which integrated nicely with the plot of Hermann Hesse’s Siddhartha, required reading for the Fuck You, Dad Generation. His name and image became a calling card for like-minded liberal youths, and lives on as the internet password of thousands of erstwhile history majors across this quiet, greening land.



1965 also saw the re-publication of the Lord Of The Rings trilogy. Tolkien’s thinly-veiled references to the modern evil of fascism and his assertion of the power of the individual to prevail against vast and impersonal forces tied in nicely with the hippie belief in grassroots activism and mistrust of entrenched and unimpeachable power structures. The gloomy pall and atmosphere of suspicion that characterized the travelers’ journey as they ventured into the unknown resonated with the mood of Cold War paranoia in America and the country’s growing distaste for government hedging with regard to the Vietnam conflict. The vibrant descriptions of The Shire as a bastion of fellowship and traditional, communal living that bore a strong resemblance to life in Britain in the years between the decline of the Roman Empire and the rise of the English monarchy, with gentle nods to pagan rituals and recreational drug use, also endeared it to the hippie.



Catch-22 and Cat’s Cradle were published in 1961 and 1963, respectively. The former establishes a language for expressing deep concerns about the increasingly oblique policies and obscure motivations of the American government as it became clear, through a series of astonishingly inept intelligence breaches, that the war in Vietnam was not at all what it appeared to be. The latter looks askance at industrialization and scientific advancement, a critical stance that appealed to advocates of sustainable living, and Vonnegut’s signature droll humor struck a chord with a culture learning to use the vocabulary of the absurd and the surreal to address the modern ills whose great magnitude rendered them inexpressible and uncontainable using the lexicon of traditional social discourse established by Greek and Roman thinkers many millennia before the phrase “communist aggression” could be understood in at least 17 distinct ways.



Ecology gained a subversive dimension with the publication of 1962’s Silent Spring, an examination of pesticide use by the U.S. Department of Agriculture. The book concluded that the widespread use of chemical pesticides endangered both wildlife and humans. I haven’t read it. Sounds boring.



The hippie owed his affinity for health food and vegetarianism not only to the sustainability philosophies espoused by books like Silent Spring, but also to the remnants of a German youth movement called the Wandervogel. Formed at the turn of the century as an alternative to civic societies and clubs with strict, formal codes, the Wandervogel romanticized pagan ritual and the nomadic lifestyle, embracing natural living and individuality as a reaction to the urbanization and homogenization of modern European life. Southern California, the seat of the hippie renaissance, became home to many former Wandervogel youths who migrated to America as Germany’s political climate worsened in the years preceding the second World War. They transplanted their natural lifestyle to a new environment, opening health food stores, raising organic food, and indoctrinating their new neighbors into the ways of the Wandervogel. (NB: Eden Ahbez, the songwriter responsible for the extraordinary “Nature Boy”, later covered by Big Star and Gandalf, was associated with the California wing of the Wandervogel. He lived, for a time, on the hillside beneath the gargantuan Hollywood sign. He also “sat in” on the Beach Boys’ 1966 Pet Sounds sessions, a very vague credit extended to many famous and half-famous people which I am still unable to interpret even with the benefit of extensive liner notes.)





As children, the hippie/Yippie generation would probably have watched the sinister proceedings of the House Un-American Activities Committee on TV as dad sipped bourbon and derided the “pinkos” and mom smoked quietly on the couch. The menacing rhetoric of Senator McCarthy and his ilk must have seemed even uglier to a child with little concept, if any, of nuclear comeuppance, snow-bound gulags or good old-fashioned buggery, and the impassioned speeches of the good souls defending themselves against ludicrous assertions of their intent to commit, by way of endorsement, all these evils must have made quite an impression on a young mind still searching for moral grounding. Thus the post-war generation was already primed for the surge in volume from the political left as civil rights, gender equality and disengagement from Vietnam all came to a tumultuous head in the mid-’60s. Intellectuals broke from the Communist Party to form the New Left, citing, among other things, the Party’s out-of-touch response to the Hungarian uprising of 1956. The fledgling political organization soon found its candidate in “Clean Gene” McCarthy, a shorn but gently sarcastic statesman from Minnesota who had a knack for connecting with the hippies. The “good” senator McCarthy inspired some hippies to “go square” in order to attract ordinary Americans to the campaign, but ultimately lost the Democratic candidacy in 1968 to Hubert Humphrey.



Oh, I probably forgot a bunch of stuff. That’s because I hate goddamn hippies.






Everything I Needed To Know About Science, I Learned From The Side Of A Van

1. The Wizard.





This is a wizard. There are no such things as wizards. However, wizards and wizardry are important concepts for understanding the evolution of political power in the ancient world.



In The Golden Bough, Sir James Frazer recognizes two basic categories of thought which correspond to two distinct kinds of “magical thinking”, or belief in magic as a force of change in the natural world. One is the idea that “like produces like”, or that an effect ultimately resembles its cause. According to this “law”, which Frazer calls the Law of Similarity, a desired effect can be brought about not only by the typical chain of causation that normally brings it about, but also by the imitation of one or more of its aspects. Thus a “rain dance” is an attempt to produce rain by means of a dance made up of motions that symbolically represent some aspects of rain, and an effigy of a person made from wood or clay is considered to be an instrument of control over that person’s fate. Frazer calls this type of magic “homeopathic” or “imitative”, as the relationship between its practice and its intended outcome is one of mimesis or imitation. Another type of magical thinking addresses the special relationship between objects that are or have been in physical contact with one another, and which continues to hold even when they are separated. Frazer designates this way of thinking “contagious magic”, as it posits a sort of identity between objects transmitted in the manner of a contagion when they come into contact with one another. Thus, the condition of a lock of hair or an extracted tooth is said to be a portent of what fate awaits the person from whom it came because both the object and the person share a common past. These two kinds of magic are united under the heading of “sympathetic magic”, so called because it involves a basic belief that objects can act on each other from a physical or conceptual distance through a secret “sympathy”, a hidden connection not affected by the changing conditions of time and space or subject to the rigid laws of causation.



Wizard, medicine man, witch doctor and magician are all names that refer to an individual who practices magic in public, for the benefit of the whole community. Even though the sorcery he (or she) practices is a form of deception - whether the sorcerer acknowledges it or not - the power and prestige associated with this position are undeniable. Because the welfare of a tribe or society is thought to depend upon the practice of magic, the wizard acquires both influence and repute through his practice of magic, and may rise to a position of great authority if his contributions are deemed sufficiently important. Thus, the roots of kingship may be found in the relatively humble station of village witch doctor or itinerant magician, and the lineage of kings can often be traced back to a single enterprising individual (or a succession of them) who exercised control over a population through the practice of magic. It could be said that while the wizard’s rituals are ultimately perfunctory and impotent, the effect they have on the public (especially if the wizard is particularly good at producing explanations as to why his “magic” sometimes fails) is one of almost magical effectiveness in itself. The magician, in other words, produces, by sleight of hand and dint of confusion, the illusion of his own ability to produce supernatural changes in the world. He practices his magic not by manipulating the rules of space, time and causation, but by manipulating the attitudes and beliefs of his constituents regarding his ability to produce these magical effects. To this end he must, at times, appear to actually produce magical changes corresponding to the tribe’s demands, but ultimately his project is to maintain his public image and so retain the power of his position.

To win the confidence of a tribe or population in this fashion requires a great deal of cunning and fortitude; thus, the rank of wizard or magician tended to comprise a particularly acute segment of the population. The wizard needed not only mental agility, but a keen grasp of rhetoric in order to convince the public that the magic wrought has indeed produced the desired effects (if it has not) or that there is, in the event that such a ruse is unfathomable, a perfectly good explanation for why it has not. In addition to rhetorical agility, the wizard must also possess a great insight into the psychology of his people (for this is the basis of rhetoric), the ability to play-act convincingly, the intellectual and physical strength to fend off the challenges of aspiring wizards who think themselves clever enough to execute his job, and above all, a general lack of scruples.



After explaining this aspect of wizardry, Frazer then produces an account of the birth of democracy as a dreary entropic process in which the balance of power reaches a dull equilibrium when the ablest man is dragged down by the weakest on account of their political equality. Skimming over a push for natural accession of the fittest that would have Goebbels pissing his jodhpurs in glee, and a defense of tyranny as the ultimate in paradigm shifts, we find in this passage Frazer’s opinion on the importance of wizardry in a nutshell: Without a station that allows the ablest men to rise to power, displacing the ossified power structures that serve to maintain the status quo and limit the freedoms of the individual, society stagnates in stultifying climes, inching its way painfully forward (if at all) by means of a ponderously slow series of cultural epicycles in which all actions lead back, more or less, to their origins. Wizards, medicine men, sorcerers, witch doctors and magicians, who lead by cunning, innovation and ruthlessness, provide an escape from this dreary equilibrium. The social ascendance and political aspirations of the wizardry provided the blueprint for rule by ability, not simply heredity or senescence.



Even if we cannot count wizards among our ranks, there are still an astounding number of natural processes that seem to violate what we think we know about the world. Many of them have been addressed in theories, or have at the very least been the subject of various hypotheses, but the typical 21st century grunt has heard (or understood) very little with regard to these arguments, which the non-scientist may regard as hopelessly esoteric. Einstein’s theory of General Relativity, couched in metaphors of clocks and rocketships, is still largely inaccessible to the average person (myself included). Without the benefit of lucid explanations (which must exist somewhere, or hopefully will be forthcoming), these processes are just as shrouded in mystery as the motion of the tides or the rotation of the heavens were to our ancient counterparts. Here are a few natural processes that have been vetted by science, but still require something very close to “magical thinking” from the average non-scientist:

  • Wizards are often credited with the ability to turn ordinary matter into pure energy, and vice versa. Literature (and oral tradition) is rich with accounts of wizards conjuring electrical storms from cloudless skies, starting fires by causing ordinary objects to spontaneously combust, and the like. While these feats are certainly superhuman, they are not unnatural. Einstein’s famous equation E = mc² describes just such a conversion, which is constantly taking place all around us (though usually at a fairly safe distance). The conversion of matter into pure energy operates on macro- and microscales, as well as the human scale: it fuels the burning cores of stars, creates sources of nuclear energy for human use and powers experiments designed to reveal the smallest machinations of the universe. E = mc² is an expression of the equivalence of mass and energy under relativistic conditions: the energy (E) of an object at rest is equal to the product of its resting mass (m) and the appropriate conversion factor to transform from units of mass to units of energy. The equation makes E equal to the mass of an object times the square of the speed of light in a vacuum, with this last term (measured in m²/s²) acting as the conversion factor between mass (kilograms) and energy (joules). (A rough worked example follows this list.) According to the equation, as an accelerating object increases in mass, becoming imponderably heavy as it nears the speed of light (186,000 miles per second), its total energy increases proportionately. This means that the faster an object is moving, the more able it is to stay on course when another object bumps into it. When an object in motion comes into contact with another, slower-moving object, some of its energy (and therefore mass) is transferred to the imposing object. The second object therefore becomes a little “heavier” as it receives an energy transfer from the first, and the original object becomes a little “lighter” as its velocity decreases.
  • Another power commonly ascribed to wizards is the ability to move through solid objects, often without altering their trajectory at all. In this respect, photons are like the “wizards” of the subatomic realm. Photons are elementary particles without mass or electrical charge, the controlled emission of which constitutes a ray of light. Pairs of photons can also be entangled, so that a measurement made on one is correlated with the state of the other no matter how far apart the two have traveled - the property known as quantum entanglement. Indeed, contagious magic may have a very real counterpart if the theory of quantum entanglement is correct!
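To put a rough, entirely back-of-the-envelope number on the first bullet above (my own arithmetic, not anything from the sources): converting a single kilogram of matter completely into energy gives E = mc² = (1 kg) × (3 × 10⁸ m/s)² = 9 × 10¹⁶ joules, which is on the order of twenty megatons of TNT.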

"Hey babe, you must be a vitreous magnesium-rich metamorphic amphibole… ‘Cause you’re definitely CUMMINGTONITE."

"Hey babe, you must be a vitreous magnesium-rich metamorphic amphibole… ‘Cause you’re definitely CUMMINGTONITE."