blatherskite: (Default)
“Yet our universe has one good gift for everyone, a generosity beyond all measure: We are wrong. Often and loudly and in embarrassingly gigantic ways, each of us is an idiot.”—Robert Reed, Empty

Reed’s words come from a science fiction short story that has little or nothing to do with learning, but like readers everywhere, I choose to exercise my vexatious right to reinterpret an author’s words and take them wildly out of their original context. (Yes, I’m aware that much like asking “what could possibly go wrong?”, this is a risky tactic for a writer. It's rarely wise to encourage readers to second-guess what you're saying.) Here, my new context for the quote will be about learning.

Over the years, I have been frustrated by and envious of those for whom learning was easy. They were the ones with the prodigious memories—the ones who could read something once with scant attention and remember all salient details. For me, learning has always been hard: it required a conscious effort, often a prolonged one, and more often than not, began with misunderstanding or an egregious error that left scars. Learning the hard way, in fact. And despite my envy of those who are spared those wounds, I’ve come to believe that the hard way is often better: it’s typically left me with a more lasting and thorough understanding, since erring and correcting those errors provides insights into both the right way and the wrong way. Those who learn too easily only learn the right way, and that’s only half of the picture.

I’m my own worst enemy some days. Indeed, back in 2005 when someone asked me how I could possibly be as productive as I was, I set out to figure out what I was doing right. The answer turned out to be sobering: it wasn’t what I was doing right so much as what I was doing wrong. I would often take an extraordinarily long time to recognize when I was being a bonehead and fix that behavior. Usually after being slapped upside the head by Life or one of its many agents. But being sobered up that abruptly apparently made a big impression, because once I started fixing the problems I was formerly tolerating, I kept right on ensuring that they stayed fixed. This led to what I call my “Captain Obvious” presentation, which I’ve given to various groups roughly every 1 to 2 years since 2005. It’s hands-down my most popular presentation because everything in it is (dare I say) completely obvious. Yet like me, most people aren’t doing the obvious until someone calls them on it. You can get the gist of the presentation from the paper I presented at the 2005 STC Seattle annual conference, Improving your editing efficiency: software skills, soft skills, and survival skills.

Nowadays, I still find myself reluctant to sit down and force myself to scrutinize what I’m doing and why it isn’t working so well. I don’t necessarily fix things immediately, but I at least keep a Word file full of lists of things I need to fix. When the frustration level finally breaches my tolerance threshold and I’m motivated to do something about it, I have a list of things to do. Apparently I've taken my own advice and started learning despite myself.

Why do we learn so well from bonehead maneuvers such as tolerating a problem instead of investing time to solve it? I speculate that it’s precisely because those errors are “loudly and embarrassingly gigantic”, and therefore make more of an emotional impression. Most people don’t notice or care about the small errors, which are soon forgotten, but the big ones leave scars, particularly if friends and family helpfully remind us of our most dumbass moves at every opportunity. It’s that extra poignancy (in the same etymological sense as “poignard”, a dagger) that makes the lessons memorable. Much though I’d prefer to find ways to learn less painfully, I’m generally proud of the scars I’ve accumulated along the long road that I’ve traveled; they remind me of just how much I’ve learned along the way.

Of course, not all learning involves stupid mistakes or mistakes that leave scars. A lot of learning is gentler, and comes because the rewards are differently intense. Raking autumn leaves is one of my favorite activities; great exercise, and I can slip into a zen state for an hour or so until I’m done. But in my current home, I have this blasted honey locust tree in the back yard that drops millions of tiny little elongated leaves, like shorter versions of willow leaves, that slip between the tines of the rake with each stroke. Almost without realizing what I was doing, I learned that using two strokes gathered the majority of the leaves: the first stroke aligned most of the leaves parallel to the direction of the stroke as they slipped between the tines so that a second stroke, at roughly right angles to the first one, caught the leaves across their long axis and stopped them from escaping.

I've also learned the simple pleasures of pausing every so often and enjoying the fall sunlight. This afternoon, for instance, just as the sun was approaching the horizon, I was rewarded by a glimpse of the white underside of a gull, gilded (gullded?) by the golden rays of the near-horizon sun, there for an instant and then just as quickly gone upon the wind.
blatherskite: (Default)
A student recently interviewed me for a class assignment, and asked several intriguing questions about the past and future of technical communication. Since the questions seemed to be of reasonably broad interest, I thought I'd republish them and my answers, with a few updates and afterthoughts.

A note before we begin: I'll mostly confine my thoughts to Western Europe and its descendants, since my knowledge of "Eastern" aspects of these questions is far weaker.

(Q1) What significant event in history made technical writing a force to be reckoned with?

I suspect this evolved as a result of several interacting processes rather than being triggered by a single key event.

First, of course, you would need to have an evolved body of "techne" (accumulated knowledge of a craft) that must be passed on in fixed form. Depending on how strictly you want to define techne, this could date back as far as the first codified Western religious works, the Jewish Old Testament and subsequent Christian Bible being most familiar to Westerners. I discuss the concept of techne in more detail in my article Technical documentation in Canada. If you prefer a more technological definition, you could instead date this to the industrial revolution, when large clanky things became sufficiently common to require some form of documentation. Other types of techne would have evolved technical writing somewhere between these times.

Second, reading and writing would have to become sufficiently widespread that formal documentation would become relevant. That is, there would need to be a sufficiently large body of readers to justify the effort of creating documentation. (Alert: gross oversimplification coming.) Before roughly the Renaissance, I suspect that most knowledge was passed on orally, though with many exceptions such as bodies of religious teaching that had to be standardized to ensure a consistent doctrine; that would date back to (in Christianity) the formation of the Christian church, and to a much older time for early Jewish writings. There would have been much interesting documentation in "the East" in the form of Greek histories and medical manuals, as well as their Islamic equivalents, before the Renaissance. And, of course, a rich written tradition in China and Japan dating back millennia.

Third, there would need to be a means of mass distribution of knowledge, since manual copying of large manuscripts was prohibitively time-consuming (thus, expensive) for a large reader community. Thus, the development of the printing press by Gutenberg (again, I emphasize in the West) is the obvious watershed moment for mass technical communication. China and probably India would have developed comparable technology far earlier, but I'm not expert enough to provide specific examples.

If you enjoy Victoriana at all, you'll be fascinated by Sydney Padua's graphic novel The Thrilling Adventures of Lovelace and Babbage. It's a charming, hilarious, irreverent, and remarkably insightful foray into many aspects of the Victorian period. Specifically, it's a fictionalized story of the development of the first computer, which is the technology most people associate with technical writing these days.

(Q2) The year is 2051: what significant role do you envision technical writing or communication playing?

The world is becoming increasingly complex to navigate, and the growth of that complexity is accelerating. I see an increasing need for clear, concise communication of complex concepts and thus, a growing need for skilled technical communicators. I see no evidence that engineers, scientists, and other subject-matter experts are becoming better writers, so there will be a growing need for good writers and good editors who understand how to translate between an expert's brain and their audience's needs.

I think we'll see more artificially intelligent assistants for our work. Modern spellchecker software, for instance, is no better than the spellcheckers I used nearly 30 years ago, and would benefit greatly from a complete overhaul, particularly in terms of support from software that understands context. But I don't see us being replaced by software in the next 40 years. It may happen some day, but my understanding of current artificial intelligence is that we've still got a long way to go before that happens.

(Q3) What are some of the prevalent gaps in this field of technical communication?

Particularly with organizations such as the Society for Technical Communication (STC), I see far too much focus on tools and technology, and not enough focus on core communication skills. We will definitely need technologists, and much of what we do is difficult without help from modern tools, but the communication skills are far more important: a communicator will always find a way to communicate, but people who are only tool users often prove to be incoherent and incomprehensible.

A second and highly significant gap that STC and other groups have failed to close is the gap in understanding of the need for our profession. Everyone learns to write, however badly, in grade school, and written material is ubiquitous -- though most of it is of low quality. Thus, there is no sense that writing is anything special. But good writing is remarkable and instantly recognizable; it turns on lights that bad writers don't even know exist. We desperately need to find ways to make employers understand our value. To my knowledge, nobody is doing this, and it makes workplace life difficult for us at times. We need to find ways to clearly demonstrate our value to employers and to society in general. I've written about our need to escape from the shadows and make what we do known to our workplace colleagues. This is crucial for survival in the workplace.

(Q4) In the former of intelligence or private sector how will technical communication play a powerful role?

I'm not sure I understand the distinction being drawn between "intelligence" (government sector?) and the private sector, so I would answer this question from first principles: The role of technical communication is always to bridge the gap between the minds of the producers of information and the minds of its consumers. I always describe technical communication as "translation" because of this role, but that ignores a more important underlying aspect: both producers and consumers of information do so to satisfy certain needs, and those needs may not align. Good communicators learn the needs of the audience and communicate those needs to the producers of information.

For "intelligence" in the context of spycraft and national security, there will be an additional matter of ethics: spies seek to conceal information, not share it, so there will be thorny ethical questions raised by intelligence-related communication. These include whether it's acceptable to conceal information from those who need it and the crucial importance of getting the message right when a failure to understand can endanger lives or even nations.

(Q5) As it pertains to technical writing and communication, what is it that you do daily (job)?

My primary work is as a scientific editor. Almost all of my clients are research scientists who have English as a second or third language, but who need to communicate in English to reach their international audience. I was trained as a scientist (physiological plant ecology, genetics, community ecology), so I understand most of the science that they're doing. Over the last 28 years, I have developed expertise in helping them communicate that science clearly: I ensure that they fully explain their social and scientific context, explain what they did (research) within that context and how they did it, clarify what the results were, and explore the implications for society and other scientists.

You could probably call this developmental and substantive editing, although they rarely bring me in at the start of a writing project to help them outline, plan, and refine their manuscripts. As I do these things, I do a lot of basic copyediting for grammar and clarity, but I also do a lot of information design work to help them with their tables of data and with their data graphics. I do a bit of French translation occasionally, and an occasional bit of technical writing (e.g., my books on effective onscreen editing and writing for peer-reviewed journals).
blatherskite: (Default)
I subscribe to the weekly Brain Pickings newsletter which, despite its somewhat unsavory name, provides a weekly feast of interesting new ideas. In a recent issue (I’m always running a few weeks behind), I came across a fascinating discussion of the concept of reality. The part I’d like to focus on here is a quote by physicist David Bohm:

"Reality is what we take to be true. What we take to be true is what we believe. What we believe is based upon our perceptions. What we perceive depends on what we look for. What we look for depends on what we think. What we think depends on what we perceive. What we perceive determines what we believe. What we believe determines what we take to be true. What we take to be true is our reality."

Although this might be splitting definitional hairs, I consider this to be the functional equivalent to a zen koan because it captures some of the same elusiveness of concept that the best koans provide: the tighter you try to grasp the concept, the more it slips from your fingers, at least initially. Like the best quotations (with which I sprinkle my Twitter feed), it gives one pause to think and often sets off that flashbulb of sudden enlightenment that the Japanese refer to as satori. The brilliance of a koan or of a humble quotation is how it conveys so much once you unpack it, all in such a wondrously concise format.

What I love about the Bohm quote is how neatly it circles back upon itself, like Ouroboros swallowing its own tail. That journey neatly captures the concept of how difficult it is to pin down the nature of reality: there appears* to be an objective reality we cannot escape (day continues to shade into night as night de-shades into day, whether or not we choose to believe in this cycle), but how we perceive and describe that reality can be so subjective as to make objectivity seem like an impossible goal. More intriguingly, we pass through a cycle in which every new thing we learn changes how we perceive reality, and that changing perception can lead to still more new insights that again change our perception. It’s a wondrous, never-ending cycle of change.

* I say “appears” to acknowledge the fact that though I fully believe in this external reality, I can provide no evidence that would persuade an extreme solipsist that it “really” exists. An old favorite quotation provides some defense for my position: "Reality is that which, when you stop believing in it, doesn’t go away."—Philip K. Dick

To tie Bohm’s koan to the subject of this blog, namely communication in all its forms, I return to the concept of subjectivity. Specifically, one of the things I’ve learned from science -- possibly our best tool for approaching a “true”* description of objective reality -- is that all “truth” is provisional. As our metaphysical tools (ways of thinking) and physical tools (measurement instruments) improve, we gradually discard old beliefs that provided only a blurry image of the truth and replace them with sharper images that provide a more accurate and holistic picture. As in the case of Zeno’s dichotomy paradox, we sometimes seem never quite to get there, but if we continue long enough, we may some day do as the bodhisattvas do and achieve that final truth. Or not. The universe is a complex place, and the deeper we dig, the more we find.

* To avoid a long and messy argument about the nature of truth, I retreat to a paraphrase of Dick’s quotation: truth is what doesn’t change when you stop believing in it. If you want to delve into the great and murky depths of this subject, check out Wikipedia’s handy summary.

When we try to communicate with someone else, particularly over issues that are freighted with emotional overtones, it can be very difficult to take a step back and remind ourselves of the subjectivity of what we see as truth. One of the subtle problems that disrupts communication is that how we perceive a truth affects how we perceive what our communication partner is saying about that truth. They’re going through the same process. To communicate successfully, it’s necessary for both partners to understand where they and the other partner are coming from and how this might constrain their ability to hear what the other is actually saying. One of my favorite quotes captures this concept neatly. Psychologist George Miller, in a January 1980 interview in Psychology Today, notes: “In order to understand what another person is saying, you must assume it is true and try to imagine what it could be true of.” It’s perhaps helpful to remind our communication partners of this important concept so they can make an effort to understand what our statements might be true of.

Communication, at its finest, allows us to recapitulate the journey of discovery embodied in Bohm’s koan, with which I began this essay: our beliefs and perceptions change iteratively and dynamically as they clash with the beliefs, perceptions, and thoughts of others. In that clash of beliefs, we collaboratively establish a newer, clearer, richer shared image of the reality we share.
blatherskite: (Default)
With the recent release of the movie The Martian, it seems timely to review the possibilities of sustaining human life on Mars in the long term. A recent journal article (Wieger Wamelink et al. 2014, Can Plants Grow on Mars and the Moon: A Growth Experiment on Mars and Moon Soil Simulants) suggests that it may be possible to grow crops in Martian soil. This is an important issue for those of us who dream of Martian colonies (and for science fiction authors who write about such dreams) because it will be crucial to grow food locally; the distance from Earth and high transportation costs mean that a colony would have to rapidly become self-sustaining.

So is this likely to be possible in the near future? In this essay, I'll discuss several key issues. Because each issue that I raise would require a separate essay to cover adequately, please note that I have (over)simplified many points to focus on the essence. Details will vary widely among crops, soil types, and so on. Please treat this essay only as a "general principles" overview of the subject.

A side note before we begin: why not hydroponics?

It's reasonable to ask why growing crops in soil is necessary in the first place. The answer is complex, so I'll simplify. Even though many crops such as tomatoes can be grown using hydroponics, this may not be possible for all crops because of the large space requirements (e.g., wheat, corn). The larger problem is that this won't be possible for some time at the scale of a potential Martian colony. The biggest obstacle is the prohibitive cost of shipping a sufficiently large collection of hydroponic gear to Mars; given current technology levels, it's implausible to suggest that we'll be able to manufacture the equipment on-site for the foreseeable future. (3D printing may solve this problem once we're able to set up "mining" operations on Mars to provide the necessary raw materials.)

The need to produce viable seeds during many generations of hydroponics is also a concern. Nowadays, hydroponic crops are harvested and then re-established from new seeds, but those seeds are usually grown in conventional farm fields. The micronutrient composition of the solutions used to nourish hydroponic crops is a related and important issue. You've probably noticed that hydroponic vegetables taste different from (and often worse than) field-grown vegetables, likely due to differences in the micronutrient supply; wine growers emphasize the importance of soil qualities as a key part of the "terroir" effect. To the best of my knowledge (and I emphasize that I have not performed a literature review to support this point), researchers have not tried to create a completely self-sustaining ("closed cycle") hydroponic crop production system and confirm its viability over periods of years. Over long periods (several years), hydroponic crops may suffer from subtle nutrient deficiencies that eventually sabotage the crop or decrease its utility to humans. On Earth, we'd never notice this problem because our diet is primarily composed of field-grown crops.

Thus, finding ways for terrestrial crops to survive and grow in simulated Martian soil is an essential research goal, and this preliminary study by Wieger Wamelink et al. is great news for Mars fans. Unfortunately, the results are hardly definitive due to gaps in our current knowledge and some significant deficiencies in the Wieger Wamelink study. Some of these deficiencies are methodological problems that should perhaps have been fixed, and others represent defensible limitations of the study based on logistical and other constraints. (Specifically, it is never possible to study all relevant factors in a single research study; sometimes a career is too short.) Based on my training as a physiological plant ecologist, here are some thoughts on the article, its limitations, and its implications for a future Martian colony:

Making light

First and foremost, the article does not explore the consequences of the light intensity and quality on the surface of Mars. Neither is a trivial issue, since Earth's plants have evolved for millennia to optimize their use of the amount and quality of the available light. It would not have been logistically possible to investigate these factors within the scope of the authors' study, so I'll frame this section in terms of needs for future research.

Light intensity on Mars would clearly differ from its values on Earth, but I can't speculate about the magnitude of the difference because this would involve a rather complex calculation: the amount would decrease greatly as a function of increased distance from the sun following the inverse-square law, and would increase due to decreased light absorption by the nearly nonexistent Martian atmosphere (i.e., there would be less light interception by molecules of air and water). Large and dense dust storms are common on Mars, and this would lead to frequent changes in the amount of light.
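Setting aside the atmospheric and dust-storm complications, the inverse-square portion of the calculation is simple enough to sketch. The snippet below is a rough first-order estimate only (it ignores atmospheric absorption on both planets, orbital eccentricity, and dust), using approximate values for the solar constant and the mean Sun-Mars distance:

```python
# First-order estimate of solar flux at Mars relative to Earth,
# using only the inverse-square law. Deliberately ignores
# atmospheric absorption, dust storms, and orbital eccentricity.

SOLAR_CONSTANT_EARTH = 1361.0  # W/m^2 at 1 AU (approximate top-of-atmosphere value)
MARS_MEAN_DISTANCE_AU = 1.524  # approximate mean Sun-Mars distance, in AU

def solar_flux(distance_au: float) -> float:
    """Solar flux (W/m^2) at the given distance from the Sun, in AU."""
    return SOLAR_CONSTANT_EARTH / distance_au ** 2

mars_flux = solar_flux(MARS_MEAN_DISTANCE_AU)
fraction = mars_flux / SOLAR_CONSTANT_EARTH

print(f"Mars: ~{mars_flux:.0f} W/m^2, about {fraction:.0%} of Earth's flux")
# → Mars: ~586 W/m^2, about 43% of Earth's flux
```

So distance alone cuts the available light by more than half before the (partially compensating) thinner atmosphere and the (highly variable) dust are accounted for, which is why supplemental lighting or breeding for low light seems likely to be needed.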

The spectral characteristics of the light would also change due to the different characteristics of light interception by the Martian atmosphere. Plants are keenly sensitive to subtle variations in light quality; these variations govern all plant developmental phases and plant responses to many types of environmental stress. Photoperiod (the length of the daylight period) is also an issue, since the lengths of the Martian day and of Martian seasons differ (respectively) significantly and greatly from those on Earth; plants have internal "biological (circadian) clocks" that govern every phase of their development, and those clocks are keenly sensitive to daily and seasonal changes in light intensity and quality. On this basis, Martian farmers will either need to provide large amounts of supplemental color-adjusted light, or will need to breed plants that are optimized to take advantage of the available sunlight on Mars.

A final issue is that of ionizing radiation, which will be present at much higher levels on Mars due to the lack of a dense atmosphere to absorb this radiation. This may kill plants directly, or cause ongoing mutations that will eventually kill the plants or render them useless as a food source. It may also significantly affect essential soil microbes (discussed in the next section). Providing shielding won't be trivial, and may not even be possible.

As a result of these factors, plants will likely have to be grown underground, with artificial illumination. This will require significant and difficult engineering.

Soil microbes

Most (possibly all) terrestrial plants either require or benefit strongly from the presence of a diverse soil microbial community, and the characteristics of that community have resulted from millennia of coevolution between the plants and organisms in their rhizosphere. Examples include mycorrhizae, nitrogen-fixing bacteria, and others. I'm not even considering essential macroorganisms such as earthworms and collembolans, which play an important role in maintaining soil structure and promoting nutrient cycling. Although the authors quite properly did not sterilize the soils they used in their study, neither did they have the resources to monitor long-term changes in the microbial community and the consequences for plants. These changes are likely to be significant, since the composition and functional characteristics of this community are strongly determined by interactions among various characteristics of the soil and the plants being grown in the soil; in turn, these changes determine the suitability of the soil for the plants. Of particular note, they can significantly affect the risk of disease development.

Pollination

The issue of pollination is not trivial. Most important crops require some combination of insect and wind pollination. Bees pollinate some of our most important crops, but they are just one example; many other insect taxa contribute. To grow such crops on Mars, we'd need to confirm that the pollinators could survive under Martian conditions, and their survival is by no means guaranteed. Manual pollination is feasible on a research scale, but not on the scale required to grow enough crops to sustain a colony. As a specific example, I note that the geothermally warmed and powered greenhouse I recently visited in Iceland requires ongoing imports of bees from the Netherlands; I did not ask the owner, but it appears to be impossible or economically impractical to cultivate the bees in Iceland rather than importing them. Warm-climate readers may find this difficult to credit, but Iceland is a far more hospitable environment than Mars would be.

Soil simulation

The biggest problem I had with the Wieger Wamelink study is the use of "simulated" soils. This was required because we simply don't have access to real Martian soils, and the authors did a good job of choosing an appropriate simulation material. But in evaluating their results, it's essential to note that this is a simulation, and the underlying assumptions may turn out to be unrealistic.

The first problem with a simulation approach is that we don't know how representative the simulated soils are of real Martian soils. That is, we have surveyed only the most minuscule proportion of the Martian surface, and even on Earth, soils are highly spatially variable. Thus, we don't know how good their choice of a simulation material will prove to be. The authors note this and other problems; the lack of strong evidence of abundant nitrogen in Martian soils is a particular concern, since nitrogen is crucial for plant growth, and modern crops require enormous amounts of supplemental nitrogen to produce their current high yields.

An additional problem is that what I have read of Martian soils suggests that perchlorates and other strongly oxidizing materials are abundant, which would make things rough for both plants and their associated microbes. The high aluminum levels that seem common in Martian regolith could also cause enormous damage to crops, particularly in acidic soils; this is one of the reasons why "acid rain" on Earth is so damaging; it mobilizes toxic aluminum compounds.

Another problem is that, to the best of my knowledge, the analyses of Martian soils have examined "total" amounts of elements, not the amounts of "plant-available" versions of the elements. This is an important difference, since the available level is often far less than the total level. The authors of the article suggest they analyzed total element contents, which will not provide an adequate prediction of long-term plant growth. (In their defence, it's very difficult to estimate how non-plant-available forms of elements would change into available forms as a result of chemical weathering and biological activity. That would require a long and complex series of additional studies.) Even if Martian soils would support a first crop, it's not clear from the present results whether they would support subsequent crops, since key nutrients would be removed from the soil with each harvest, and would be restored primarily by adding human feces and urine (suitably composted) to the soil before the next crop. Unfortunately, such nutrient cycling would be difficult to implement in practice; such systems are notoriously "leaky", with significant ongoing losses to the environment that would have to be replaced somehow.

The biggest problem would be losses of organic matter, whose final fate is to be converted into carbon dioxide or methane. The former would be taken up by the plants, though not without some loss; the latter would represent a net loss of carbon from the system. One consequence of this point is that you'd have to add organic matter to the soil at least as fast as it is depleted by biochemical and chemical degradation; this is necessary to increase the soil's water-retention ability, preserve the structure of the soil, and provide a nutrient supply for essential soil microbes. Human wastes would be used for this purpose, but since those wastes would initially come entirely from food supplied from Earth, it's likely to take some time to get crops growing well enough to become self-sustaining from a carbon perspective.

[A look back: I neglected something obvious, namely that you'd also need to have a significant source of CO2 to "feed" the plants. Haven't done the math, but I'm not sure that a colony of humans would generate enough CO2 from breathing to support an area of crops large enough to feed them. So you'd likely need supplemental CO2 from somewhere.]

Water and oxygen

Water availability is a particularly serious issue. Water evaporates rapidly in a low-pressure atmosphere, even on Earth; on Mars, with less than 1% of Earth's atmospheric pressure, evaporation would be even faster. It's hard to imagine creating a dome large enough to grow plants on the Martian surface that would be completely airtight; even creating something underground would be difficult. Thus, you'd need a robust system for recapturing or replacing lost oxygen and water for the plants to survive in the long term. Oxygen is relatively easy, since there are large quantities of oxidized materials on the Martian surface, and all you'd need to liberate the oxygen is a large supply of (solar cell?) electricity and appropriate engineering.

The apparent presence of liquid but very salty water on the surface of Mars gives hope that water could be supplied locally. However, any liquid sufficiently salty to retain its water in the near-vacuum of the Martian atmosphere, rather than losing it to evaporation and sublimation (as occurs in Earth's deserts), will be extremely difficult to desalinate. Water frozen into ice at the Martian poles seems abundant, but transporting it to the likely location of a colony would not be a trivial task.

Crop maturity

It's not clear why the researchers didn't grow all the plants to maturity (they stopped after 50 days) to confirm that they could successfully produce the desired final crop (seeds, fruits, etc.). Most of the agricultural researchers I work with do this even for studies based on terrestrial soils, since a great many factors can prevent successful seed production even if flowers develop and seeds appear to be produced; for example, unsuitable temperatures before, during, or after pollination can result in the production of nonviable seed. In addition, the researchers did not analyze the nutrient quality of any of the seeds that were produced, which remains a significant question for future research. (To be fair, such analyses are clearly beyond the scope of the authors' study; I mention this point solely in the context of a need for future research.)

It's all very well to produce seeds, but if they won't germinate or prove to be severely nutrient-deficient, particularly in terms of micronutrients, this won't end well for Martian colonists who must consume them.

In conclusion...

All this being said, the Wieger Wamelink et al. study is important because its results don't rule out the possibility of growing crops in Martian soil. That's a very good thing should we want to establish a colony. But as this essay shows, there's still much research to be done before we can believe that Martian crop production will be possible on a scale large enough to support a colony.
blatherskite: (Default)
Scientists and technologists have good intentions in spades, but sometimes you wonder if they ever leave the house and mingle with real people. Take, for example, two well-intentioned but doomed ab initio efforts to put some ethics back into a particular branch of technological endeavor, namely the development of artificially intelligent robots. A brief definitional note before we get going: “Intelligence” is a slippery term to define, and in practice, the definition usually comes down to “whatever standard I can evoke that will make me seem more intelligent than you or allow me to treat you as a lesser being”. For artificial intelligence, the standard definition relies on the Turing test, which (in greatly simplified terms) states that something is “intelligent” if one cannot distinguish it from a real human. With the footnote that intelligence is multidimensional, not something that can be gauged with a single evaluation or a single evaluation metric, this test remains a broadly useful criterion, and one that I will adopt. In short, we can summarize this test as “a difference that makes no difference is no difference”.

The first problematic initiative aims to eliminate the use of artificially intelligent robots in military contexts. Even if you don’t believe that the Terminator franchise represents the inevitable endpoint of research on this technology, you have to admit that the Future of Life Institute makes a compelling case for why we should not go down this particular dark road. To me, the most compelling reason is that replacing human warriors with technological surrogates seems to eliminate the human cost of warfare and thereby makes war seem insufficiently horrible to make prevention a priority.

In practice, this is only true for the aggressor, and then only if they can remain a safe distance from the chaos. We’re already seeing how shortsighted this perspective is in the high “collateral damage” associated with the use of advanced military technologies, most recently in the form of remotely operated drones. This damage should not have been at all surprising given the spectacular failures of previous “this will solve everything” technologies, such as precision bombing, that promised to eliminate or reduce civilian casualties, but we humans are nothing if not expert at ignoring inconvenient realities (cf. the abovementioned “in practice” definition of intelligence).

In reality, civilian casualties are inevitable in modern warfare, and have increased greatly over the last few millennia (in absolute numbers, if not proportionally). The problem is that conflicts rarely occur in neatly delineated killing fields, like sports stadiums located far from civilians. It’s simply not credible to propose that modern warfare will only be fought in carefully sequestered arenas where the combat can be kept far away from civilians. Pretending that artificially intelligent robots would solve the problem is nothing more than a layer of abstraction, intended solely to make the unpalatable palatable by hiding its ugly reality. The terminology itself illustrates the problem: instead of the accurate phrase “death of non-combatants”, or the simpler “murder of innocent civilians”, “collateral damage” only serves the goal of abstracting human tragedy so that we can ignore its ethical consequences.

Eliminating the use of artificially intelligent robots in warfare therefore has much to recommend it. Yet there are two problems. First and most serious, those who make the decision to declare war on others rarely, if ever, experience the consequences personally. As a result, they have no incentive to avoid declaring war because someone else will pay the price for them. Eliminating robots from the equation does nothing to solve the problem. Second, the history of technology is the history of finding ways to convert even the most seemingly innocuous technology into a means of killing or wounding other people, and the history of warfare is the history of conflicts escaping nice, tidy boundaries.

Warfare is only a specific form of the violence we humans seem to do instinctively, and it has deep roots in all cultures and all historical periods. It’s not something we’re going to abandon or confine to killing fields that will spare civilians, no matter who or what does the fighting. Hence the sarcastic and deeply pessimistic title of this essay, “good luck with that”.

The second initiative aims to eliminate the use of artificially intelligent robots in sexual contexts, and specifically to eliminate “sexbots” -- robots designed primarily or solely for use as sexual surrogates. This one’s a little harder to understand, at first glance: such devices could eliminate the spread of sexually transmitted diseases, provide companionship and possibly emotional instruction to people who may not be able to sustain a healthy relationship on their own, and greatly reduce (though probably not eliminate*) sexual slavery or the abuse of adults and children. Yet as in the case of warfare, adding a layer of abstraction to something as fundamentally human as our sexuality lets us avoid dealing with the real problem. In addition, there’s considerable evidence that humans (at least a small proportion thereof) will copulate with just about anything that moves, and many things that don’t; this second initiative will face a hard time combating that urge. This leads to the “good luck with that” conclusion for this initiative too.

* Sexual abuse is not always about sex; often it’s about power over the weak, or sadism, or other unpleasant aberrations of human psychology.

Another concern, raised by SF writer Elizabeth Bear in her chilling short story, Dolly (about the abuse of a sexbot and its consequences), is the intelligence part of artificial intelligence. Whether in matters of warfare or sexuality, it’s hard to imagine that it would really be more ethical to shift abuse from our fellow organic beings to non-organic but otherwise intelligent beings and rationalize this abuse as acceptable. “Intelligence” is relative, not an absolute and binary scale that provides nice distinctions. If you accept that proposition, the possession of intelligence should entitle any intelligent being to the same protections we would grant ourselves, including protection from sexual abuse. Not everyone accepts this as being valid; to some, there is a unique spark (let’s call it a “soul”) that makes humans qualitatively different from anything else, no matter how intelligent. Yet even if we accept their distinction as valid, the long and horrible history of torture suggests that “good luck with that” is again the correct response to any suggestion that we ban such behavior.

So should we throw up our hands in despair and ignore these issues? The story of King Canute is often misrepresented as an example of human arrogance. In the incorrect version of the tale, a powerful but arrogant king attempts to turn back the tide and fails. This failure has spawned the idiomatic phrase “attempting to stem (halt) the tide”, with the implicit meaning of a doomed fight*. Yet men and women of good conscience should attempt to stem the tide, even if their struggle seems doomed. Unlike Canute, we have some hope of stemming the future tide of misuse of artificially intelligent robots, at least for most nations and for some time.

* In the original version of this tale, the King’s goal was to demonstrate the importance of humility to his courtiers: some things cannot be stopped by even the most powerful humans, good or bad intentions notwithstanding. That's the wrong message for the sake of this essay.

As proof of what is possible, I offer the example of the 1925 Geneva Protocol, an early attempt to limit the use of chemical and bacteriological weapons in warfare. Though the protocol has by no means eliminated the use of such weapons, the contrast with the use of chemical weapons (toxic gases) during World War I and earlier uses of smallpox-contaminated blankets in an effort to eradicate tribes of Native Americans is dramatic; rather than toxic gases and microbes becoming a standard part of the military toolkit, the use of such tools remains the exception, and one that attracts horror and often reprisals from the international community. People still die, often horribly, during warfare, but the conventions have greatly reduced the frequency of two horrible ways to die. The non-use of nuclear weapons since the end of World War II is another promising sign, though recent events in Iran and North Korea give me cause for hesitation.

As a cynic, I don’t think we’ll suddenly evolve sufficiently ethical behavior on a global scale to win this fight. Thus, I see no plausible way to avoid the creation of warrior robots and sexbots. But the successes in limiting other abuses make the fight no less worth fighting. We may not be able to stop either form of abuse, but we may at least limit its scope. “Good luck with that” is not an acceptable response when so many lives, whether natural or artificial, will be affected.
blatherskite: (Default)
Just finished Cherie Priest's Maplecroft: the Borden Dispatches, and like the other examples of her writing that I've read, I can recommend this one highly.

Maplecroft is a carefully researched "what if?" about the historical figure of Lizzie Borden ("Lizzie Borden took an axe, gave her mother forty whacks..."), crossbred with a Lovecraftian "bad things happen to good, bad, and indifferent people because the universe at best ignores us and at worst, actively hates us". The basic premise is that Lizzie wasn't a crazed murderer, but rather someone who fell into Lovecraft's world and was forced to defend herself and her loved ones as best she could, with wholly inadequate tools. It's far more restrained linguistically than Lovecraft, and (for obvious reasons) not misogynistic, and therefore it's more deeply affecting. The story is told as an epistolary (i.e., via letters and journal entries), which proves to be a very effective way of introducing many POV characters who don't always understand what the other characters are doing or thinking. Priest combines the best of first-person narration with unreliable narrators, and does so masterfully.

Lizzie, though our main protagonist, is accompanied by several other key viewpoint characters. Like a late-Victorian Buffy the Vampire Slayer with her "Scooby gang", Lizzie courageously fights the forces of darkness that have chosen to destroy her family, while simultaneously dealing with the "mundane" and in many ways equally horrible ravages of "consumption" (her sister's losing fight with tuberculosis)*. Like Buffy, she and other characters make many well-intended mistakes (some tragic) that have profound consequences. I won't spoil things by telling you how the story turns out, but it's a deeply human tale of a struggle against impossible odds and incomprehensible forces. As in the best Lovecraft, there are costs and consequences for everyone who gets drawn into the darkness. Nobody escapes completely intact, no matter their intellect or virtue.

* A very interesting parallel if you want to go all lit-crit.

One non-spoiler false note: Because the 1890s are a key period during which the scientific enlightenment really got rolling good and hard, several protagonists try to explain what's happening to them in scientific terms, even as they learn that this worldview doesn't match their increasingly Lovecraftian world very well*. This is fine so far as it goes; we humans use our mental models of how things work to understand our world, and the scientific worldview was a key mental model at this time. Where this goes astray is when Priest enlists it as a valid mechanism for dealing with the inexplicable and fighting the unfightable. To me, it would have been more effective to leave the inexplicable unexplained and show how the mental model failed; when you cling to a reassuring belief (here, that anything can be understood through the application of logic and science) while the world falls apart around you, the horror is compounded when that belief proves false. This authorial choice doesn't in any way ruin the book, but it diminished some of its punch towards the end.

* Brian Lumley wrote a bunch of stories in this vein. They're enjoyable works on their own terms, and a nicely executed response to Lovecraft (i.e., humans *can* fight successfully against madness and a hostile universe through rationalism and technology or technologized magic), but as a result, I felt they lost some of their punch. Charles Stross strikes me as doing a better job of mashing up science and Lovecraft, particularly in the deeply chilling A Colder War.
blatherskite: (Default)
Just finished reading "Corporate Espionage", by former NSA analyst and current "white hat" hacker Ira Winkler. It's about the many ways both hackers (those who penetrate computers for the fun of it and bragging rights) and crackers (those who penetrate computers for malicious purposes) sneak into companies and extract potentially billions of dollars of proprietary information -- and in the case of banks, sometimes literal millions of dollars.

But it's about much more than that: it's a detailed treatise on how spies of all sorts sneak into (penetrate) companies by exploiting vulnerabilities. And the most serious vulnerabilities are almost inevitably human, not technological, though even the technological vulnerabilities often have human help in remaining open. Understanding the way people work and respond to both co-workers and other people lets hackers and crackers use "social engineering" techniques to gain access to areas where they don't belong and escape with astonishing amounts of information.

The book was written in 1997, so it's a bit out of date in some areas (e.g., Winkler discusses modems as a major point of vulnerability), but the basic principles remain valid (now it's cable modems or routers that are key points of vulnerability). It's also a fascinating updating of Bruce Sterling's "The Hacker Crackdown" (1992), but written by someone who lives the life rather than by a journalist. (No diss at all intended for Sterling, who really did his homework.)

What's really disturbing is how little has changed in the 20-some years since these books were published. Although Winkler doesn't provide hard or verifiable (i.e., referenced) data in most cases, billions of dollars were being lost annually even back in the 1990s, and the losses have probably grown by at least an order of magnitude since. Anyone who doubts this should contemplate the recent rash of penetrations of U.S. government computers, which have full-time and highly motivated security staffs protecting them; Edward Snowden; the recent antics of Chinese government-sponsored crackers; and the whole "Anonymous" movement.

What's even more disturbing is that we're currently in a "cold war" situation, with most of the hacking and cracking being done by amateurs or by professionals with very limited goals (e.g., stealing specific trade secrets). One can only imagine what would happen if a true cyberwar erupts.

And imagination is why I'm sharing this review here. Winkler's book is a great resource for writers if your only prior experience with cracking comes from Hollywood, which rarely gets any of the details right. (I've just started watching "Mr. Robot", which looks to be that rara avis -- something where the writers actually understand what they're writing about. Thus far, it looks excellent.) Winkler gets the key details right, and in a very disturbing way. But he's not just a fear-monger. He concludes the book with a long list of advice on how companies and governments could be doing better to protect their -- and our -- data.

Highly recommended source material if you want to write about cracking and cyberwar. Or if you just want to suggest the need to improve your employer's protection by anonymously leaving a copy of this book on the president's or CEO's desk.
blatherskite: (Default)
One of the things you notice (at least if you're paying attention) is how life falls into certain rhythms. The daily cycle from waking to sleeping is the most obvious; the annual cycle reveals itself in the turning of the seasons. But whether or not you've been paying attention, these and many other rhythms affect your work life, and that, in turn, affects your "real" life outside of work. Rather than fighting these patterns, it's wiser to find out how to "go with the flow" and use them to your advantage.

For example, I have a very clear daily pattern. I usually have a mug of half-caffeinated coffee with breakfast, then once it's kickstarted my brain enough for me to be recognizably sentient, I go check e-mail, reply to the simple messages, and generally get my day's tasks sorted out. Then I indulge in a second mug of coffee to bring me up to full mental speed before I begin my real daily work. A single mug of full-caffeine coffee right at the start would arguably be more efficient, but I enjoy coffee for its own sake, not just as a performance-enhancing drug.

While my brain is coming up to speed, I focus on doing some of the more mechanical editorial tasks that don't require full sentience. These are things like responding to more challenging e-mails that actually require some thought and checking the literature citations and References section in the day's manuscript. Once I'm fully up to speed, I dive into the challenging work of figuring out what my author is trying to say and finding ways to help them say it. Mid-day, I'll go out for a walk to do any errands that need doing. Towards the end of the day, as my ability to concentrate wanes, I'll leave the computer and do some stretching exercises for half an hour -- kind of a moving meditation, without being anything as sophisticated as actual yoga or tai chi. Refreshed, I return to finish any remaining work, and when that's done, shut down the computer, go do aerobics or weights, and finish the day with Madame.

Understanding this rhythm in how my body works lets me match the nature of the work to the amount of sentience available for me to allocate to that work. During pre-sentient periods while I wait for the coffee to kick in, I get a lot of work done that doesn't require much in the way of brainpower; once the coffee is working, I focus on the work that requires focus. It would be a waste of time and effort to try accomplishing the really demanding stuff while my brain isn't up to the task, and a more serious waste of time doing low-brainpower work while my brain is working at peak efficiency. Accounting for how my brain and body work makes me far more efficient and effective than I would be if I tried to fight those rhythms.

Annual rhythms are more complex. Most of my editing clients are researchers, and pretty much all of them live in the northern hemisphere. So their work schedules are affected both by the same annual turn of the seasons I experience and (for university researchers) by the ebb and flow of the northern hemisphere school year. This pattern is further complicated by whether they work primarily in the lab (including on the computer or in the library) or in "the field" (i.e., outdoors somewhere).

Lab scientists are only weakly affected by the turn of the seasons. Instead, they are strongly affected by things such as the annual funding cycle. For example, if they've budgeted a certain amount of money for editing and publication of their research papers, they need to spend that money before the end of the fiscal year, and that annual budgetary period creates deadlines for their writing. My government authors tend to have a 1 April* start to their fiscal year, so I know they'll be doing their best to spend their remaining budget in February and March; that means they send me a ton of work at this time. Then there's a lull as they pause to catch their collective breath and resume the cycle. If they work at a university rather than a government or private institute, they also tend to try to finish their work before school starts (August and January) or after it ends (December and May) so that they aren't being distracted by their teaching requirements or the demands of their students.

* The irony of government budgets being determined by April Fool's Day does not escape me.

Field scientists are also constrained by the school year if they work for a university, but more importantly, are governed by the seasons. Because my work relates primarily to environmental and ecological subjects, they need to work during the time when their study subjects are alive and growing or moving around. Having done some field research myself, I'm also keenly aware that it's more fun being out in the field during clement summer weather than at -30C in the winter, and scientists being human, they tend to schedule their research for the summer even if it could (in theory) also be done during the winter. So summer is usually a lull period for them from a writing perspective, but they get quite busy once they return home in the fall (September onwards), with computers full of data to analyze. They also get quite busy in the month or two before they leave to begin the new season's field research -- peer reviews of a manuscript typically take months, so it's efficient to schedule those reviews while they're away from the office -- so March and April also tend to be quite busy.

Over time, I've learned that these patterns determine my work load at any given time of year. Knowing the patterns lets me take measures to even out the flow. For example, I send out a warning e-mail a couple months before the typical busy periods to tell everyone that they should reserve my time well in advance, or ideally send me work before the busy period begins. This lets me allocate the available time to each of them who's likely to need it and reduces the number of really long days when I need to work on two manuscripts simultaneously to meet client deadlines. Conversely, before predicted slow periods, I send out an e-mail suggesting that these periods would be a great time to work with me because they won't be competing with everyone else for my time. There are still, inevitably, heavy and light periods, but they're less heavy and less light than they might otherwise be. And I'm less stressed dealing with the heavy periods.

This proactive management of my schedule also lets me do things like arranging vacations during periods when my work load would ordinarily be lowest. That minimizes the amount of income I'd lose by not being available during a busy period, and equally importantly, minimizes the amount of work that arrives in the weeks before I leave and that accumulates while I'm away.

If you're a freelancer, I encourage you to do a similar analysis of your workflow and use the results to better manage your life. If you're an employee, the advice is equally valuable, but you'll have different busy periods; your company's budgeting period may use the calendar year rather than 1 April, the work of your colleagues may be governed by the annual schedule of important trade shows or government grant application periods, and so on. Learning these annual patterns is the first step in finding ways to control your work schedule -- or finding ways to go with the flow rather than fighting it.
blatherskite: (Default)
Just finished editing a paper about embryonic development, in which the authors present a batch of cross-sectional data showing the relative positions of certain embryonic structures (e.g., the heart) at different times during embryonic development. It's creepily fascinating the way things move around; for example, the embryo's heart moves downwards from the neck region, passing through arm structures en route to its final destination in the thoracic cavity.

I was also fascinated to see that the authors didn't seem to have thought beyond the print communication model, which of necessity requires the presentation of static images. But most journals now encourage authors to publish "supplemental information" on their Web site; this is information that would be impossible to publish in the printed version of the journal. Reasons for this impossibility include a requirement for color (which remains very expensive to print), the massive size of a dataset (e.g., large genetics databases), or -- most interesting to me -- information that would benefit greatly from the multimedia capabilities of the Web (i.e., sound and video).

Once every couple months, I find myself encouraging authors to take advantage of this "new" possibility. In the context of the embryo paper, the authors used 3D modeling software to create static anatomical images showing the positions of various structures, which is great as far as it goes. But they didn't consider the possibility of providing the actual models as supplemental material, which would allow readers of the paper to download the models and move through them the way doctors move through CAT and MRI scans to observe the characteristics of a structure in three dimensions. Neither did they use the software to produce an animation that shows how the anatomy evolves progressively during embryonic development.

Such visualizations would be an important tool for helping readers understand both anatomy and its changes over time. Yet the authors didn't think of this! It's a sufficiently important omission that I devoted an entire chapter to this subject in my recent book, Writing for Science Journals.

If you're a communicator (writer, editor, other), it's always worthwhile stepping back for a moment and asking yourself whether you're a little too comfortable inside your particular box, or whether stepping outside that box would reveal powerful additional tools for effective communication.
blatherskite: (Default)
Before diving into the meat of this essay, let me define a few terms related to how software is developed and tested:

Audience analysis is how you begin the development process: you spend some time thinking about how people are likely to use your software, confirm these suspicions with real users (if possible), and plan accordingly; that is, you design the software to support its users during the common tasks they will be performing with the software. Ideally, you use some form of Pareto optimization, in which you prioritize the subset of the product's features that are used most often or that provide the maximum benefits to the maximum number of users, then add more features as time and resources permit. I've written extensively on this subject.

Alpha testing is what the programmers do, often with help from a company's "quality assurance" or testing staff, before they consider that a product is ready to be unleashed on its eventual users. In an ideal process, you ensure that all the software features work as required (based on the audience analysis), and fix all the bugs you discover while you're validating the code that you've produced; you might do this yourself, or possibly with a colleague's help, depending on the software development culture at your employer. In practice, programmers and quality assurance staff rarely have time to do either task to their own satisfaction, because software release schedules are driven by marketing, not by the programmer's desire to release a product they can be proud of.

Beta testing is what happens after alpha testing. In this step, you release your best shot at producing a stable, usable product to a subset of its users, most of whom are not employed by your company. This group then tests the software to destruction to make sure it works. The principle is that several hundred people (sometimes thousands) banging away at the program in ways you hadn't anticipated will reveal subtle flaws you didn't notice during your alpha testing, thereby giving you time to fix them before you release the "final" version of the product. (I put "final" in quotation marks because as any computer user knows, software is never final. There are always subtle bugs that go undetected or that are sufficiently rare that a company figures it can wait to fix them. And each new round of patches, fixes, and updates tends to introduce new errors. This isn't professional incompetence; it's inherent to the nature of any complex system, and software is pretty damned complex.) Beta testers receive the software for free during the testing period, and often receive a free license for the final shipping version of the software to compensate them for their efforts.

If all three steps go well, the product that is finally released for sale is stable and usable. How well does Microsoft meet these criteria? In terms of its Windows operating system, amazingly well, particularly given the (literally) more than a billion people who are using one of the various versions of Windows. In terms of Microsoft Office? For the Windows version (WinWord), quite well. I've been profoundly impressed by how well Word has worked with each new version that I've installed: it's become more stable, less buggy, and (often but not always) easier to use. But on the Macintosh side (MacWord)? Not so much. MacWord has been bad enough that when I train editors to use Word for editing, I always start with the recommendation that they use the Windows version, even if they use a Macintosh as their primary computer. WinWord has simply been a better product for the nearly 20 years I've been using Microsoft Word. MacWord 2016 promised to bring the Mac version up to parity with WinWord. Did they meet that promise?

Caveat: I haven't used MacWord 2016 myself, since I'm waiting for the first major round of bug fixes before I install it on my work computer. I'm basing this review on the demos I've seen, a few online reviews, and my wife's experience with the first shipping release of the software. The TLDR (too long, didn't read) version: My wife, who's been working with word processors for something like 30 years, got so frustrated with Word 2016 that after a couple days of cussing it out, she abandoned it and returned to Word 2008 -- which is also severely flawed, but is at least stable.

In terms of audience analysis, MacWord 2016 is a nice move in the right direction. One of the horrible problems with MacWord has been how radically it differed from WinWord in its interface. Microsoft has always justified this based on the outdated notion that users of the two operating systems have different expectations, an argument that has some merit. However, this argument ignores three key points about real-world use of Word: First, most computer users are now proficient at switching interfaces when (for example) they move from Internet Explorer to Firefox or Chrome. The difference between Windows and the Mac is now largely irrelevant, and is often less disruptive than switching between programs under the same operating system. Second, and much more important, releasing versions of Word with sometimes radically different behavior means that even after users figure out the interface differences, they can't rely on the software to behave identically. Third, and most important in the working world, these two factors create an impossible situation for trainers, who must master at least two different versions of the software (Mac and Windows) and find ways to teach the two groups of users about the differences. At a crude guesstimate, my book on onscreen editing is about 25% longer than it would need to be if the interface and behavior were consistent between versions.

With Word 2016, Microsoft has finally taken some significant steps to make the user interface look more similar between Mac and Windows, but the software still doesn't behave the same on both platforms. A large part of the problem results from a failure in the alpha testing process, which let profound bugs slip through that should never have been revealed to the eventual audience, and by insufficient time allocated to beta testing, which would have given the programmers time to solve those problems.

In terms of alpha testing, Microsoft seems to have fallen on its face -- again -- with MacWord. Many features that they broke when they first released Word 2011, and took months or years to fix, were broken again in the first release of Word 2016 (e.g., not displaying the correct tab of the Ribbon for the current context, a "garbage collection" error that generates the alarming message that there's not enough disk space to save the open file, even if a ton of space is free). This wouldn't be such a problem if the beta testing process had been given enough time to detect such problems and let the programmers fix them. But the current version of Word 2016 is a shipping product: you have to pay money for it. That's entirely inappropriate for a product this buggy.

Bottom line: Microsoft should be ashamed of this performance, and should not expect customers to pay for such a shoddy, unfinished product. My recommendation that professional editors should stick with WinWord still stands. In a month or two, after Microsoft releases the first major service release for MacWord to fix these egregious problems, I'll be willing to risk installing it, and will then report back on whether they've come close to parity with WinWord -- or even produced a usable product.
blatherskite: (Default)
The best humor, as in the case of later books by Terry Pratchett, is both funny and profound. I can't think of any current writer who comes close to Pratchett in combining the two attributes, but Jasper Fforde makes a valiant effort, as the latest book in his "Thursday Next" series reminded me.

I'll skip irrelevant background context and come right to the point: Fforde reminds us of the need to devote a little time each day, even if only a few seconds, to pondering something about the world and to laughing.

Pondering reinstills a sense of the wonder of the world. I get some of that from the scientific perspective in my daily work: the deeper you delve into ecology, the more ramifications and recomplications you discover. In the words of Jonathan Swift, "So nat'ralists observe, a flea / Has smaller fleas that on him prey; / And these have smaller fleas to bite 'em. / And so proceeds Ad infinitum." Or as the Hindu world myth would have it, "it's elephants all the way down". But there are many wonders other than science to be experienced if you pause a moment to ponder; my favorite recent insight was into just how weird it must be to be a house cat, and to be owned by something inexplicable that is close to 20 times one's own size. Imagining what that must be like nearly blew my mind. Then there are the daily miracles of a lover's smile and the touch of her hand.

Laughter, of course, has its own rewards, particularly when shared. My favorite recent geek joke was Fforde's throwaway line about a new compression format for jokes, JAPEG*. Sheer brilliance! But humor can be much more profound, as in the case of Québecois comedian Martin Matte**, who recently delivered a funny and touching tribute to his father. My favorite bit was his reflection, driving home from the funeral home with his father's ashes in an urn in the passenger seat, about whether he could legitimately take the commuter lane reserved for cars with two or more passengers. And whether his father would be "burned" if a cop stopped them.

* For the less geeky: a "jape" is a joke, and JPEG is the current standard for compression of photographic images.

** And pause a moment to appreciate the beauty of a world that has an École Nationale de l'Humour in it.

Laughter has the additional virtue that it makes the Forces of Darkness gnash their teeth in frustration. There are days when I think they're winning, but it does my heart good to deny them the satisfaction of making me resent it. There are virtues to a heroic death, but given the low likelihood of such an outcome from a humble editor's life, I'll be happy to die with a laugh on my lips and the sound of grinding teeth in the cosmic background.
blatherskite: (Default)
Writing about artificial intelligence requires one to deal first with the thorny issue of human intelligence*, since it’s helpful to start with an idea of what you’re discussing to provide context and set the evaluation criteria. Getting to a useful working definition is difficult, since it’s tempting to fall into tautology: as the beings who are tasked with the need to come up with a definition, it’s perhaps inevitable that we’ll define it in a way that shows us in the most favorable light. You can see this thought process at work in how ethologists (scientists who study animal behavior) have historically kept moving the goal posts each time some animal is found to be able to accomplish one of their sacred “human-only” skills.

* The story goes that Mahatma Gandhi was once asked what he thought about Western civilization, and that he replied with Shavian wit: “I think it would be a good idea”. Sadly, the same statement might be profitably applied to human intelligence.

With the caveats that all such lists are provisional and that a good science fiction author (or psychologist or ethologist) will be able to propose interesting exceptions to each of the following criteria, here’s my starting point for a list of the key attributes that indicate intelligence:

  • tool use: the ability to go beyond the limits imposed by one’s body by finding or creating suitable tools, whether levers to move rocks or words to move hearts

  • symbol use: the ability to use symbols (words, facial expressions, gestures, whatever) to communicate information

  • abstract thought: the ability to describe a problem in a generalizable way that allows one to solve different but related problems, possibly based on pattern-recognition skills

  • learning: the ability to form and store memories in a way that allows them to be retrieved and compared (note that this also requires pattern recognition skills)

  • a sense of time: the ability to define cause and effect relationships requires an ability to understand what comes first (the cause), what follows (the effect), and the time that elapses between the two (i.e., a short time suggests causality, a longer time conceals causality)

  • goal-seeking behavior: the ability to set a goal and find ways to achieve it rather than merely accepting what the world gives us (note that this assumes an ability and desire to change our environment instead of being forced to change in response to it)

  • self-awareness: the ability to recognize oneself and distinguish between oneself and others

  • emotional awareness: the ability to recognize our responses to a situation (emotions) and deal with them

  • delayed response: the ability to consider the aspects of a problem before acting rather than just responding by reflex (i.e., judgment)

    One thing that this list leaves implicit but that should be made explicit is the fact that there is both gestalt and synergy at work here: intelligence is both the sum of all these things and something greater than that sum. Simply checking off each item on the list is not sufficient to define someone or something as intelligent. For each of these criteria, humans differ from other animals primarily in the degree to which we can meet such criteria. For example, crows can solve complex problems that would baffle some humans, and cetaceans are arguably even more intelligent; some dolphins, for instance, have learned to cooperate with human fishermen.

    In addition to these criteria, I would claim that natural intelligence has three “mechanical” requirements that will lead us directly into a discussion of artificial intelligence. The thoughts that are the hallmark of human intelligence require three things to function: an engine capable of operationalizing the thoughts (i.e., the human brain), fuel capable of driving that engine (i.e., knowledge), and some kind of software that forms relationships (e.g., language, mathematics) and that drives the engine to accomplish something. For artificial intelligence, the engine is a computer, the fuel is data (“big” or otherwise), and the software is (duh!) the software. For both human and non-human intelligences, one might productively argue that a fourth factor is necessary: the ability to compare experiences with others, whether through conversation (humans) or Internet connections (computers).

    With the same caveat that many people will move the goalposts as soon as it looks like a computer might be reaching the same level of these skills as humans, how close are computers to meeting the criteria with which I started this article? Let’s take each point in turn:

  • tool use: Computer-controlled manufacturing (e.g., assembly lines) and mobile robots (whether on Mars or here on Earth) are clear evidence that computers can use tools. They’re not yet capable of creating their own tools, with the limited exceptions described below.

  • symbol use: Codes such as the binary “words” that lie at the base of all computer software are proof of symbol use; more advanced symbol use is demonstrated by the increasingly powerful examples of image-recognition software (e.g., facial analysis, feature extraction) and by assistants such as Apple’s Siri and Microsoft’s Cortana, which can not only recognize simple speech but reply and take actions in response to that speech.

  • abstract thought: Thus far, computers have not achieved what we would typically consider to be abstract thought. But that statement depends heavily on what we consider to be “abstract”. For example, Mathematica can perform remarkable feats of mathematical problem-solving.

  • learning: Neural network software and genetic algorithms can clearly “learn” and preserve that learning, albeit with some assistance from us. Furthermore, there’s no reason (other than a lack of interest in doing so) why programmers have not designed operating systems capable of learning our preferences and adapting to them. We can manually force software to do this through our “preference” settings and control panels, but I want a computer that notices how I manually back up data to a flash drive every hour or so and offers to do this for me. Voice recognition software already learns our unique vocal characteristics, so this kind of adaptation is clearly possible.

  • a sense of time: Software is inherently time-based, and statistical software can detect correlations between events or factors, but the recognition of cause and effect relationships is still some way off.

  • goal-seeking behavior: This is the whole basis of machine learning, so clearly software can seek goals. It can’t yet define its own goals, however; we still tell it what our goals are and command it to meet those goals.

  • self-awareness: A computer’s ability to recognize itself and distinguish between itself and other devices is inherent to such things as the media access control (MAC) address that uniquely identifies a computer’s network card and the IP addresses that underlie the URLs we type into our Web browser. Unfortunately, that’s a primitive talent compared to (for example) the ability to explore the implications of cogito ergo sum.

  • emotional awareness: To the best of our knowledge, we haven’t been able to program emotions into computers, in large part because we define emotions based on complex biochemical reactions that lead to complex neurological responses that haven’t yet been emulated in software. But fields such as the design of facial recognition software are advancing rapidly, and it won’t be long before our computers can recognize when we’re sad or happy based on a glimpse of our face.

  • delayed response: All modern software has the underpinnings of this skill, since the software is generally event-driven (i.e., it waits for something to happen and some criterion to be met) and then chooses how to respond based on a series of hardwired criteria for the appropriate response to any given event. Problem-solving and goal-seeking (optimization) software already exists, and will rapidly become more sophisticated. However, software generally can't improvise in response to events that were not anticipated by its programmer and it may be a very long time before it acquires this ability.

  • Bottom line? All the rudiments are in place for the evolution of a true artificial intelligence. We already have the software equivalents of idiots savants, which are very good at one or a few things and completely hopeless at every other task. Computer scientists are aggressively pursuing the goal of more sophisticated systems, and they’re likely to come up with increasingly sophisticated results. I have no doubt that within my lifetime, they’ll come up with software capable of passing the Turing test.
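The “learning” entry above is easier to grasp with a toy in hand. Here's a minimal sketch of my own (an illustration, not any real AI system): a single perceptron, the simplest possible neural network, learning the logical AND function by nudging its weights every time it guesses wrong -- erring and then correcting the error, in miniature.

```python
# A toy perceptron that "learns" logical AND from its mistakes.
# Hypothetical illustration only -- no real AI system is this simple.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # one weight per input
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

for epoch in range(20):             # a few passes over the examples
    for (x1, x2), target in data:
        error = target - predict(x1, x2)
        w[0] += lr * error * x1     # nudge the weights only when wrong
        w[1] += lr * error * x2
        b += lr * error
```

After a handful of passes the weights settle and the perceptron answers all four cases correctly; nothing in the code spells out what AND "means".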

    But this leads us to the question of whether AI might evolve spontaneously. My take? It’s more likely than one might think. For evolution to occur, several criteria must be met:

  • Evolutionary pressure must be exerted on the organism: If there is no “need” for a group of organisms to change, then evolution is conservative and tends not to cause a change. Survival is the usual “need” that produces change: organisms that survive because they’re adapted to their environment pass on their genes (see the next point); those that fail to survive don’t pass on their genes. For software, the evolutionary pressure is imposed by computer scientists (the blind watchmakers of the software universe), but with the rapid advances being made in genetic algorithms and self-modifying software, it seems likely that setting goals for such software and weeding out software that fails to meet those goals will create enormous evolutionary pressure.

  • The organism must be able to change and retain those changes: In nature, the mechanisms that permit this adaptability and memory are genes. Since the whole point of being able to update and upgrade software is to change and retain those changes, computers are clearly capable of this function. Self-modifying code will take this to the next level.

  • Notwithstanding the previous points, random events are also important. Just as most mutations in the human genome are counterproductive or even fatal, computer programs are unlikely to improve or even continue functioning after experiencing a random change in the code. But it’s not hard to imagine software becoming orders of magnitude more robust than it currently is and becoming able to cope with and even benefit from such glitches.

  • Again, all of the rudiments for evolution are in place. But an evolutionary leap forward to something new and recognizably intelligent won’t happen soon. The current dominant model for software development is “command and control”: a programmer defines the behavior required by their software, and embeds that behavior in stone, and when the software doesn’t behave as desired, it’s debugged and redesigned until it does. But there are signs that we’re moving towards something more interesting, in which the programmer instead defines the goals and constraints and lets the software figure out how to accomplish those goals. When software becomes broadly capable of such feats of insight, we’ll see a true sea change, in Shakespeare’s original sense of “a sea-change into something rich and strange”.
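That last point -- defining the goals and constraints and letting the software find its own way -- can be sketched in a dozen lines. What follows is a deliberately toy genetic algorithm of my own devising (not code from any real system): the programmer supplies only a fitness test (the goal: a string of twenty 1s) and a mutation rule, and selection pressure does the rest.

```python
import random

random.seed(42)  # reproducible toy run

GOAL_LENGTH = 20

def fitness(genome):
    # The "goal" the programmer defines: as many 1s as possible.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Random variation: each bit may flip with small probability.
    return [1 - bit if random.random() < rate else bit for bit in genome]

# Start from a population of random "organisms".
population = [[random.randint(0, 1) for _ in range(GOAL_LENGTH)]
              for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GOAL_LENGTH:
        break  # goal reached; nobody told the program *how*
    survivors = population[:10]                              # selection
    offspring = [mutate(random.choice(survivors)) for _ in range(20)]
    population = survivors + offspring
```

The program is never told which bits to flip; fitness plus selection "discovers" the answer, which is the command-and-control model turned inside out.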

    What we’ll then begin seeing is emergent behavior, as in Dolly, Elizabeth Bear’s brilliant and chilling story.
    blatherskite: (Default)
    Unless you’ve been living in an unusually arid desert the past few years, you’ve undoubtedly heard of cloud computing—or, more simply, “The Cloud”. But what exactly is The Cloud? It’s a nebulous concept, and that makes it hard to pin down precisely what it means. The variety of interpretations doesn’t help. So in this article, I’ll attempt to de-mistify the concept so you can think a bit more clearly about how it works and how you can use it safely.

    The original notion behind the cloud metaphor was that traditional computing was like a pot of water: everything was all together in one place, with all the limitations that this entailed, including the risk of losing all the water if someone knocked the pot off the stove. But imagine, The Cloud’s inventors proposed, if that water were more like the Internet: if you turn up the heat until the water boils, you get a cloud of steam—a bunch of dispersed droplets of water, that nonetheless function as if they were a single entity. At least, they do until a strong wind comes along and disperses them and they can no longer function as a single thing. That’s implicit in the metaphor, and we’ll come back to it presently.

    In more technical terms, The Cloud represents a widespread collection of computing assets that function together as if they were a single thing. Those assets may be computers and other hardware, software, data, or some combination of these categories of things. The individual components can also cover for each other, so that if one is lost or damaged, the others continue to function as if nothing happened. A primary advantage of cloud computing is that it’s multiply redundant: if one part fails, then other parts will take over and unless you’re responsible for administering that part of The Cloud, you’ll ideally never know anything happened. In this sense, it’s like the old notion of a redundant array of inexpensive (now, “independent”) disks (RAID).
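As a rough sketch of the redundancy idea (my own toy model, not how any actual cloud service is built), imagine every write being copied to several independent nodes, RAID-fashion; a read then succeeds as long as any one copy survives:

```python
class RedundantStore:
    """Toy model of cloud-style redundancy: every value is written
    to several independent "nodes"."""

    def __init__(self, replicas=3):
        self.nodes = [{} for _ in range(replicas)]

    def put(self, key, value):
        for node in self.nodes:      # replicate to every node
            node[key] = value

    def get(self, key):
        for node in self.nodes:      # first surviving copy wins
            if key in node:
                return node[key]
        raise KeyError(key)

    def fail_node(self, index):
        self.nodes[index].clear()    # simulate losing one node
```

Store a value, wipe one node, and get() still answers as if nothing happened; wipe all the replicas and the data is gone, which is the "strong wind" of the metaphor.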

    When this approach works, it works very well indeed. The Internet itself is a great example of the overall principle, since it was designed right from the start to be a distributed entity so multiply redundant that it could survive a nuclear war by rerouting traffic around any pathways or nodes in the network that were eliminated. Though nobody’s tried to test the nuclear survivability of the Internet in a real-world trial, there have been many glitches that did their best to take the Internet or large parts of it down—usually as a result of human malice (e.g., denial of service attacks) or human incompetence (e.g., cutting a backbone cable that conveys the majority of a service provider’s service while digging a ditch).

    Because of this power, cloud computing should be part of everyone’s strategy. For example, I use DropBox’s file storage service to automatically back up my data, so if the roof falls on my computer, my files will still be safe on the Dropbox servers*. There are many other advantages. For example, you gain access to a dedicated staff of hard-core geeks who take care of your part of The Cloud to ensure that it stays up and running and that your data remains safe. I take reasonable precautions with my computer and data, but I don’t tend to it 24/7. I’ve got better things to do with my days and nights. (So do the geeks, but they get paid for their 8-hour shifts doing this work.)

    * If you don’t have a DropBox account, contact me by e-mail and I’ll send you an invite. The service is free, and accepting an invite gets you 256 Meg more storage than you’d get if you sign up on your own. Then you can invite all your friends and earn 256 Meg of additional storage for each one who accepts your invitation.

    If The Cloud is so wonderful, why do I remain so intensely skeptical about it? In part, because of the hype it’s been attracting. All you hear about are the benefits, and nobody warns you of the drawbacks. In the rest of this article, I’ll provide some suggestions of what those drawbacks might be, how they can turn a nifty coherent cluster of interacting droplets into a damp patch on the floor, and how to protect yourself from such problems.

    The first thing to keep in mind is that The Cloud is still in its early days, particularly compared with the Internet as a whole. Thus, it’s still being refined and hasn’t yet reached the same mature state of reliability as the Internet. A related problem is that there is no one “The Cloud”; rather, it’s a large collection of related services, with some overlaps and many non-overlapping areas, and everyone seems to define and implement it at least slightly differently. Some services, such as DropBox, have nonetheless maintained remarkable availability and security. In contrast, Apple’s ongoing availability problems with its iCloud service are a good example of why this immaturity is problematic: if you can’t rely on a Cloud-based service... well, you can’t rely on it. Duh! It’s become something of a truism that it generally takes three tries to get a design right, and only the oldest cloud services are working on their third full iteration.

    The subject of availability leads us to the important concept of a guarantee of service: a key service must be available when you need it, else it’s useless. This is particularly true when the cloud is used to provide software as a service, as in the case of Microsoft’s Office 365, which provides access to software such as Word via your Web browser. Microsoft has done this right in many ways: availability has thus far been pretty good, and if the service is down, you can keep working from a copy of Office installed on your computer. (If you're using their OneDrive service, you'll have access to all of your documents both via the online service and via your computer; they're kept in synch.) This is crucial for someone like me who spends five days a week earning their living using Word. The flip side is that if your computer dies, you can move to another device (another computer, but also increasingly a tablet like an iPad) and pick up where you left off. This is similar to the IMAP e-mail approach, in which your messages are stored on your service provider’s computers, but you can download a copy of the messages to deal with when you’re not connected to their computers.

    Immaturity of the technology also means that security is an issue, and an increasingly important one. Like any version 1.0 or 2.0 product, The Cloud still has some holes. In the old pre-Cloud world, someone who wanted to break into your data had only one point of access: the one device that stored all of your data. With the cloud, your data may be stored across dozens or even hundreds of computers, each of which represents a potential point of attack. When a security problem is discovered, managers of a service typically “roll out” the fix on only a few computers initially to ensure that the fix isn’t worse than the original problem. Until they’re satisfied the fix works and they can install it safely on the other components of their part of The Cloud, the other parts remain vulnerable. This is particularly problematic because even though each implementation of The Cloud is somewhat different from all others, all implementations rely on certain shared protocols that let the different services work together. This can lead to widespread security problems when one of those shared protocols is compromised. Unfortunately, when you depend on a Cloud service, you also depend on its providers aggressively testing for such problems and responding rapidly when problems are revealed. When no one company is responsible for maintenance of something as important as one of the underlying protocols, it can take some time for problems to be detected and fixed.

    The Cloud is a great idea, and I use it judiciously as part of my business and personal computing strategies. But I don’t uncritically accept the hype. To account for the problems, I protect anything important in several ways:

  • I maintain security on my own computer (good antivirus software). And I skim several newsletters to be sure I’ll learn when a serious security problem has been discovered so I can take appropriate countermeasures (e.g., not use a compromised service or insecure software until the problem is fixed).

  • I back up all my data offline (on DVDs), near-line (in a hard drive connected to my computer), and online (via DropBox). If any one source is compromised, my data is safe on the other sources.

  • For the few things that are so important I need additional security, I encrypt the data. If someone should break into (say) DropBox and gain access to my data, they’ll have to break the encryption before they can use the data.

  • I rely primarily on software on my own computer, but have an old backup computer I can switch to if the main computer dies. I’m looking into Office 365 and iPad-based editing, but haven’t yet made this an integral part of my strategy.

  • I distrust any cloud service that doesn’t let me take similar steps to protect myself.
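On the encryption point, even a toy example shows the principle. The sketch below uses only Python's standard library to build a one-time pad (fine as an illustration of encrypt-before-you-upload, though for real backups you'd want an established encryption tool or library): the key stays on your computer, and only the scrambled bytes ever reach The Cloud.

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR each byte with a same-length random key (a one-time pad).
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse

data = b"my important file"
key = secrets.token_bytes(len(data))  # keep this OUT of The Cloud
ciphertext = encrypt(data, key)       # this is what you'd upload
```

Anyone who grabs the ciphertext from the server sees only noise; only the locally held key turns it back into the original file.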
    blatherskite: (Default)
    Subtitle: Of Sheep and Men

    “[Harold’s] that most dangerous of animals, a clever sheep. He's the ring-leader.”—Eric Idle, Monty Python’s Flying Circus

    “All animals are equal, but some animals are more equal than others.”—George Orwell

    In the world of Aardman Animations, the life of a sheep is not an easy one: torn from bed at sunrise of each day, fed nothing but a few scraps of corn, marched off to the paddock under guard by a snarling dog, locked in a drafty barn again at the end of the day -- and occasionally sheared for the wool on your back, with no compensation for your labor. It’s exactly the kind of “boot stamping on a human face -- forever” world that Orwell imagined in his nightmares. This is hardly surprising, as Shaun the Sheep comes to us from the studio that brought you Chicken Run, a Swedish-cinema-noir-bleak study of man’s inhumanity to man, with the lecture delivered by using The Great Escape to draw parallels between the fate of innocent chickens destined for the meat pie factory and that of men imprisoned by the Germans during World War II. This is not your grandfather’s children’s movie.


    Just kidding.

    As anyone who’s ever seen one of the Wallace and Gromit shows knows, Aardman Animations has a unique gift for telling gentle, funny, heartwarming stories that are as much a pleasure for adults as for kids. You tend to leave the cinema with a big-ass grin on your face, and Shaun is no exception.

    Plot synopsis: Shaun, our protagonist, has grown bored with the daily grind, and being that most dangerous of animals, a clever sheep, decides to break the mold. With the help of his woolly partners in crime, he tricks the farmer into falling asleep on the job (by having the herd run past his eyes, then behind his back repeatedly, so that the poor farmer finds himself counting sheep* ad infinitum). Once the farmer’s snoring, they place him in the cot of his camping trailer, draw the window shades, and enter the house for a day of recreation, planning to make popcorn and pizza, drink martinis (made from a bouquet of flowers), and watch videos on TV. Unfortunately, the trailer hasn’t been properly secured, and rolls downhill into The Big City, bearing the unwitting farmer. Plot complications ensue, starting with the farmer’s three pigs taking over the house (“While the sheep’s away, the pigs will play”?) and really getting going when the trailer comes to a halt, followed by the farmer emerging and getting bonked on the head, thereby losing his memory and ending up in the hospital. When freedom loses some of its attraction for the sheep, they sneak into town to mount a rescue operation.

    * The level of background detail is phenomenal, since Aardman really pays attention to the minutiae that give a story three-dimensionality and a sense of being real. In addition to the “counting sheep to fall asleep” joke and the fact that little pigs come inevitably in threes, there are dozens of small visual or other jokes along the way. These include the four sheep, camouflaged as humans using stolen clothing, crossing the street in an homage to the iconic Abbey Road Beatles album image; a signboard for The Big City that lists its sister cities as La Grande Ville, Grossestadt, and Gran Ciudad; a poke or three at the fashion-conscious and trendy; the sheep, being sheep, not knowing human social conventions for restaurants and playing the innocents abroad by emulating the behavior of those around them; a hilarious poke at prison films (including the scene in every cowboy movie ever in which someone busts the hero or villain out of jail); a QR code that went by too fast to capture but that turns out to be an easter egg; and the very-meta road sign labeled “Convenient Quarry” which leads up to the climactic and terrifying (at least, for a couple of 5-year-olds in the audience) confrontation with the villain of the story. My favorite was the “baabaashop quintet” pun. For a possibly complete list, see Luisa Mellor's list on the Den of Geek! site. Honestly, how do people spot all these things?

    It’s all good spirited fun, with nobody getting seriously hurt**, no foul language (other than some fowl language from the rooster), clever animation that uses facial expressions and other clues rather than actual words to convey almost all of the dialogue, and a sheer generosity of spirit that will leave you grinning like a fool. Do stay to the end of the credits for yet another easter egg.

    ** However, in a sinister but possibly unintended touch, there didn’t seem to be any statement that “no animals were harmed during the production of this film”, despite a solicitous note that brain injuries such as the one suffered by the farmer are potentially very serious, accompanied by a link to the Headway Brain Injury Association Web site.

    For the official trailer and several other goodies, visit the official Shaun the Sheep Web site.
    blatherskite: (Default)
    Back when I worked for a single someone else instead of 200+ someone elses, my boss used to come to me periodically and ask me to cut a manuscript's length by 50% or more. I rarely had much trouble doing it, even for reasonably good writers who weren't egregiously verbose. Apart from my having a ton of practice applying this skill, it helps that English is highly redundant; our language contains a surprising amount of built-in error-proofing to ensure clear communication.

    But I also have a gift of seeing what’s important and what isn’t. Thus, I’ve often told my authors that it’s possible to tell any story in 50 words or fewer, and when they don’t believe me, I show them. For example, how would I describe “economics” in 50 or fewer words? Here are two thoughts:

  • Cranky mode: “A collection of logical fallacies that stem from the erroneous assumption that Homo economicus is common and that markets are fair.” (21 words)

  • Respectful mode: “Sometimes-profound insights into how and why humans make resource-allocation decisions.” (12 words)

  • (Pause to admire how the respectful mode is... ahem... more economical of words.)

    Both could be shortened further using the various tips I’ll present in the rest of this essay. How about something really complex, like (say) genetics? How about: “Cellular computer programs that define how organisms grow, develop, metabolize, reproduce, and pass those programs to their offspring.” Relativity? "The laws that govern time and motion vary as a function of velocity; time, mass, and dimensions behave differently as we approach the velocity of light." And so on.

    In these examples, the key lies in finding the key points and eliminating anything that’s not required to convey those key points. It also helps if you accept the principle that you can’t say everything or provide full details, and shouldn't try. The goal of concision is to convey the essence. Completely explaining any interesting concept takes space, and the more complex the concept, the more space it takes. Consider, for example, that economics, genetics, and physics each require 500-page textbooks just to cover the basics of each discipline. Many of the basics spawn their own textbooks, and so on for subsets of each of those basic points.

    Since this essay scrolls relentlessly past the bottom of your screen, you’ve undoubtedly noticed a certain irony: this essay isn’t particularly short. In my defence, I spend my whole week practicing concision; my weekend essays are the textual equivalent of putting on stained sweat pants and a torn t-shirt, swilling beer in a lawn chair, and chatting with a friend. If you’re from the TL;DR (“too long, didn’t read”) generation, and have miraculously read this far, here’s the short version: “Concision’s easy: eliminate the unimportant stuff.” If you’ll allow me a few more words: “You can do it too, with practice.” If you’re willing to read on, here’s the (flabby, verbose) version of the essay. If you want to write concisely:

  • Start by identifying the key points. Then identify and eliminate the “merely interesting” points. Retain only the strongest support for the key points. Use imperative statements (as I'm doing here) when you want to tell someone what to do.

  • Start with a strong outline based on the key points. Don’t waste time or space describing the unimportant stuff.

  • Eliminate repetition. (Deredundantize!) The "rule" that you should “tell them what you will say, say it, then remind them of what you said” works better in oral presentations than manuscripts.

  • Establish the context once, then repeat it only when a detour or digression changes the context and you need to re-establish the original context.

  • Ruthlessly eliminate adjectives and adverbs.

  • Replace compound verbs and verb phrases with precise, strong verbs: write in a way that confuses the real point = obfuscate. (Most style guides have long lists of verbose phrases and their shorter equivalents. Study them.)

  • Speaking of obfuscation, don’t circumlocute: get to the point.

  • Replace compound words or phrases with precise single words: blog post = essay, pale red = pink, evil man = villain.

  • Watch for implicit redundancies, particularly in clichés and stock phrases: temporary reprieve = reprieve, unfilled vacancy = vacancy, unexpected surprise = surprise.

  • Use metaphors or key words, such as Homo economicus in my definition of economics, that speak volumes to those who understand the lingo.

  • Use possessives, even for inanimate things: the point of this essay = this essay's point = my point.

  • Use pronouns or acronyms judiciously: once you’ve established that the National Aeronautics and Space Administration is NASA, use NASA thereafter. Multi-word phrases such as “our committee” can be replaced with shorter pronouns such as “we” or “us” when the context is clear.

  • Eliminate (1) numbers and (b) letters used to enumerate short phrases; they’re rarely helpful. Turn longer phrases into a bulleted list, particularly if the sentence that introduces the list lets you eliminate one or more recurring words: “Our goals are to: [list]” rather than “Our goals are to..., to..., and to ...”. If you feel the need to use words such as first, second, and third, use a numbered list and eliminate those words.

  • Limit yourself to one strong example; provide two or three only for complex topics with qualitatively different cases or sub-cases.

  • Cite or link to resources external to the text to provide details.

  • Combine sentences by eliminating overlapping elements: “This essay provides many examples of concision. These illustrative examples show...” = “This essay provides many examples of concision that show...”

  • Eliminate the least important parentheticals. (These are words between parentheses, like this sentence, or between commas, like this phrase, that only embellish.)

  • Replace negatives with positives: not alive = dead, not wrong = right.

  • Let the manuscript sit for a day before you revise it. Examine every word under the editorial microscope to see whether it’s crucial or merely “useful” and whether its role might already be served by another word.

  • Get a Twitter account and learn to use it. A 140-character limit focuses the concentration most wondrously. (Try not to cheat by breaking longer messages into two or more parts.)

  • Of course, you can be too concise, particularly when you’re writing fiction and the goal is to wallow in the sheer joy of words. Leo Rosten’s famous joke about “fresh fish sold here daily” illustrates the problem with excess concision: Obviously, sold is redundant; the fish aren’t an art display. Similarly, here: where else would they be sold? Lose daily; if they’re not sold daily, they wouldn’t be fresh. Lose fresh; nobody would buy stinky old fish, and you're not dumb enough to try selling them. The remainder, fish, is also useless; these aren’t dogs or computers. Just display the fish in your window, and everyone will figure out why they’re there without all that redundant verbiage that makes English such a powerful tool and so much fun to play with.
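    The tip about implicit redundancies lends itself to a bit of automation. Here’s a minimal sketch of a script that flags a few of the redundant stock phrases mentioned above; the phrase table is purely illustrative (my own short list, not a style guide’s), and a real checker would want a far longer one:

    ```python
    # Toy concision checker: flag implicit redundancies and suggest
    # the shorter equivalent. The phrase list below is illustrative only.

    REDUNDANCIES = {
        "unexpected surprise": "surprise",
        "temporary reprieve": "reprieve",
        "unfilled vacancy": "vacancy",
        "pale red": "pink",
    }

    def suggest_trims(text):
        """Return (verbose phrase, shorter equivalent) pairs found in text."""
        lowered = text.lower()
        return [(phrase, short) for phrase, short in REDUNDANCIES.items()
                if phrase in lowered]

    sample = "Their unexpected surprise earned them a temporary reprieve."
    for phrase, short in suggest_trims(sample):
        print(f'"{phrase}" could be just "{short}"')
    ```

    Nothing here replaces an editor’s judgment, of course; it merely catches the clichés you’ve trained it to catch, which is the point: once you’ve noticed a redundancy once, you can hunt it mechanically thereafter.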
    blatherskite: (Default)
    I’ve been blessed (if you’re me) or cursed (if you’ve been forced to listen to me) with insatiable curiosity and a profound sense of wonder at the universe. Pretty much anywhere I look, I can find something in the natural world to fascinate me. And sometimes my brain flits around from notion to notion like a butterfly with ADHD. Over the years, I’ve accumulated an enormously wide, though often shallow, appreciation for a great many things.

    As the years go by, I’ve tended to oscillate between my scientific training (wanting to name and pigeonhole everything) and simply appreciating things for their own sake, without having to apply a label that fixes them in intellectual formaldehyde. Labels are tremendously useful things; they help us define how things fit together, and knowing how the many parts of the world fit together provides a much more profound understanding of its wonders. But labels also strongly predetermine how we think of things, which can prevent us from seeing beyond the narrow walls of the mental pigeonholes we’ve built to contain them. More importantly, sometimes it’s nice to just enjoy something without having to think of its larger implications.

    I also love reading; witness the overflowing bookshelves in our house.* I’m of the opinion that pretty much anything I read will teach me something new or inspire thoughts completely unrelated to what I’m reading (see above re. ADHD) but that are nonetheless interesting. For example, today, for no reason I can discern, I found myself wondering why police require guns and clubs to subdue potentially violent but not necessarily dangerous citizens. The Romans had a simple and elegant solution: use nets, like those used by the type of gladiator known as a retiarius. For the most part, a skillfully used net should be completely harmless to the citizen, and would be inexpensive enough that every patrol car could have one in the trunk. Heck, officers on foot could probably carry a couple on their belt. Hmmm...

    * Were Shoshanna not equally voracious in her reading habits, this would be a serious problem. Fortunately, we’re highly compatible in this way. Even so, we both made the supreme sacrifice a few years back of weeding out some of our duplicate books and donating them to the staff of a convention.

    During the Iceland trip that I described in the past couple weeks of blog entries, I was talking to our guide, retired geologist Richard Little of Earth View tours, about geology and legends. (Richard is an excellent tour guide and organizer, by the way. If you love geology and visiting exotic places, Richard’s a great choice.) Our discussion prompted a memory of a book I’d read more than 30 years ago, Ragnarok: the Age of Fire and Gravel, by the 19th-century U.S. congressman, early litcrit guy, and amateur scientist–author (many now say pseudoscientist) Ignatius Donnelly. (This dabbling across multiple disciplines was a common Victorian thing, and it produced both interesting insights and arrant nonsense. So does modern intellectual endeavor, though usually less often.)

    Donnelly wrote this book in an effort to explain scientifically why certain geological evidence strongly suggested a large cometary impact that strewed similar types of rock and gravel around the globe. Unfortunately, Donnelly was writing well before glaciation was fully understood and before plate tectonics was being seriously considered by geologists (i.e., before Wegener began musing about continental drift in the early 20th century). Plate tectonics does a far better job of explaining the evidence. But what Donnelly got right (as subsequently confirmed by substantial geological evidence) is the fact that all kinds of large rocks and possibly even comets periodically strike the Earth, and that people who were alive at the time would have seen the larger impacts and tried to incorporate them in their body of myth.

    It's been 30+ years since I read Donnelly’s book, but my memory is that it's a fascinating example of 19th-century amateur scientific sleuthing and did a plausible job of explaining the available geological data. I remember the writing as charmingly antique (i.e., that Victorian style thing) and I remember devouring the book in only a few days. Donnelly turned out to be wrong because, of course, his knowledge was incomplete and, like many amateur scientists, he was perhaps unaware of how much data real scientists amass in their efforts to understand. But apart from the lesson in the history of science, what fascinated me about Donnelly’s book was that he took the second part of this idea and ran with it: the idea of how scientific phenomena can be incorporated in a culture’s myths. Thus, Ragnarök represents one of the early efforts to subject myth and legend to a scientific test to see whether there was a plausible scientific explanation for the myth. Here, Donnelly was specifically investigating the Icelandic/Norse Ragnarök myth: a large comet striking Earth would almost certainly carve a fiery trail through the atmosphere (the fire part of Ragnarök), leave a trail of debris and signs of an immense impact (the geological evidence Donnelly mustered), and create a mini-ice age if it threw up enough dust (the ice part of Ragnarök). Donnelly provides examples from several other cultures to support his hypothesis.

    This notion blew my young mind. The Victorians took it as a given that disciplines as different as science and history could be combined in highly productive ways that took advantage of their different strengths, but this kind of interdisciplinary cross-pollination has subsequently fallen out of favor. As a result, and a sad one at that, it isn’t being done nearly as often as it could be: professionals in various disciplines tend to work in their own isolated silos rather than working together to share their expertise. Whatever else one might say about Donnelly, he provides an example of several things: that amateurs can enrich our way of seeing the world, even when they’re wrong; that none of us can master all subjects, and that the amateur’s desire to understand multiple disciplines is best achieved by collaborations between professionals in these disciplines; and that (for me) understanding why an author’s thesis was right, wrong, or somewhere in between is itself a source of inspiration.

    Another example of this hybrid scientific/historical approach to exploring deep history is the notion that the great flood of the Judeo-Christian Bible represents an oral history of the prehistoric flooding of the Mediterranean basin that occurred when the land between Gibraltar and Africa was eroded away, allowing the Atlantic to flood into the basin. Unfortunately, the flood timing doesn't seem to support this possibility; that flooding is estimated to have occurred more than 5 million years ago, well before modern humans evolved (ca. 300 kaBP for Neandertals). The flooding of the Black and Caspian seas, between 16 kaBP and 7 kaBP, is a more likely candidate for the source of this myth. Of course, the fossil evidence is also incomplete and fragmentary, so it's possible that the Neandertals originated much earlier than 300 kaBP and that even older branches of the human lineage were much smarter than we currently believe and could have been around and verbal by the time of the flood. I’m not convinced, but it’s fun to play with such notions. Julian May’s Pliocene Exile series has a ton of implausible fun with these notions. So even if the science and history are suspect, it can still lead to some fun ideas.

    The point I’m trying to make in this essay relates to the excitement provided by new sights, new ideas, and new connections among previously unconnected facts. In a sense, it’s less important whether the idea is correct than that it’s exciting. There’s always time to explore the idea using whatever tools you prefer (science, psychology, culture, whatever) and find out whether it’s plausible; it’s the exploration that’s important. The world’s a fascinating place, and sometimes idle speculation leads to even more fascinating insights, as in the case of Wegener following the chain of inspiration provided by suspiciously similar continental boundaries and inspiring a whole new field of geology (plate tectonics). Sometimes the exploration seems futile, as in the case of the Mediterranean floods, but can still result in good stuff (fiction in this case). The journey is as important as the destination, as is true in so many areas of life.

    When I come across something that strikes me as cool, I want to share it with everyone I can trap into listening so they can share some of my sense of wonder and excitement. That’s a major reason why I write so much nonfiction, particularly related to writing and editing. I want other editors and writers to benefit from what I've learned. It's also why I blog about my vacations and take hundreds of photos: when I get home, I share the collection with anyone who expresses sincere interest so they can share some of my sense of wonder. (Shoshanna usually boils them down into a much shorter collection so as not to bore those who express only polite interest.) It’s my way of making the world a more wonderful place, one thought at a time.

