blatherskite: (Default)
Scientists and technologists have good intentions in spades, but sometimes you wonder if they ever leave the house and mingle with real people. Take, for example, two well-intentioned but doomed ab initio efforts to put some ethics back into a particular branch of technological endeavor, namely the development of artificially intelligent robots. A brief definitional note before we get going: “Intelligence” is a slippery term to define, and in practice, the definition usually comes down to “whatever standard I can evoke that will make me seem more intelligent than you or allow me to treat you as a lesser being”. For artificial intelligence, the standard definition relies on the Turing test, which (in greatly simplified terms) states that something is “intelligent” if one cannot distinguish it from a real human. With the footnote that intelligence is multidimensional, not something that can be gauged with a single evaluation or a single evaluation metric, this test remains a broadly useful criterion, and one that I will adopt. In short, we can summarize this test as “a difference that makes no difference is no difference”.

The first problematic initiative aims to eliminate the use of artificially intelligent robots in military contexts. Even if you don’t believe that the Terminator franchise represents the inevitable endpoint of research on this technology, you have to admit that the Future of Life Institute makes a compelling case for why we should not go down this particular dark road. To me, the most compelling reason is that replacing human warriors with technological surrogates seems to eliminate the human cost of warfare and thereby makes war seem insufficiently horrible to make prevention a priority.

In practice, this is only true for the aggressor, and then only if they can remain a safe distance from the chaos. We’ve already seen how shortsighted this perspective is in the high “collateral damage” associated with the use of advanced military technologies, most recently in the form of remotely operated drones. This damage should not have been at all surprising given the spectacular failures of previous “this will solve everything” technologies such as precision bombing to eliminate or reduce civilian casualties, but we humans are nothing if not expert at ignoring inconvenient realities (cf. the abovementioned “in practice” definition of intelligence).

In reality, civilian casualties are inevitable in modern warfare, and have increased greatly over the last few millennia (in absolute numbers, if not proportionally). The problem is that conflicts rarely occur in neatly delineated killing fields, like sports stadiums located far from civilians; it’s simply not credible to propose that modern warfare will be fought only in such carefully sequestered arenas. Pretending that artificially intelligent robots would solve the problem is nothing more than a layer of abstraction, intended solely to make the unpalatable palatable by hiding its ugly reality. The terminology itself illustrates the problem: instead of the accurate phrase “death of non-combatants”, or the simpler “murder of innocent civilians”, “collateral damage” only serves the goal of abstracting human tragedy so that we can ignore its ethical consequences.

Eliminating the use of artificially intelligent robots in warfare therefore has much to recommend it. Yet there are two problems. First and most serious, those who make the decision to declare war on others rarely, if ever, experience the consequences personally. As a result, they have no incentive to avoid declaring war because someone else will pay the price for them. Eliminating robots from the equation does nothing to solve the problem. Second, the history of technology is the history of finding ways to convert even the most seemingly innocuous technology into a means of killing or wounding other people, and the history of warfare is the history of conflicts escaping nice, tidy boundaries.

Warfare is only a specific form of the violence we humans seem to do instinctively, and it has deep roots in all cultures and all historical periods. It’s not something we’re going to abandon or confine to killing fields that will spare civilians, no matter who or what does the fighting. Hence the sarcastic and deeply pessimistic title of this essay, “good luck with that”.

The second initiative aims to eliminate the use of artificially intelligent robots in sexual contexts, and specifically to eliminate “sexbots” -- robots designed primarily or solely for use as sexual surrogates. This one’s a little harder to understand, at first glance: such devices could eliminate the spread of sexually transmitted diseases, provide companionship and possibly emotional instruction to people who may not be able to sustain a healthy relationship on their own, and greatly reduce (though probably not eliminate*) sexual slavery or the abuse of adults and children. Yet as in the case of warfare, adding a layer of abstraction to something as fundamentally human as our sexuality lets us avoid dealing with the real problem. In addition, there’s considerable evidence that humans (at least a small proportion thereof) will copulate with just about anything that moves, and many things that don’t; this second initiative will face a hard time combating that urge. This leads to the “good luck with that” conclusion for this initiative too.

* Sexual abuse is not always about sex; often it’s about power over the weak, or sadism, or other unpleasant aberrations of human psychology.

Another concern, raised by SF writer Elizabeth Bear in her chilling short story, Dolly (about the abuse of a sexbot and its consequences), is the intelligence part of artificial intelligence. Whether in matters of warfare or sexuality, it’s hard to imagine that it would really be more ethical to shift abuse from our fellow organic beings to non-organic but otherwise intelligent beings and rationalize this abuse as acceptable. “Intelligence” is relative, not an absolute and binary scale that provides nice distinctions. If you accept that proposition, the possession of intelligence should entitle any intelligent being to the same protections we would grant ourselves, including protection from sexual abuse. Not everyone accepts this as being valid; to some, there is a unique spark (let’s call it a “soul”) that makes humans qualitatively different from anything else, no matter how intelligent. Yet even if we accept their distinction as valid, the long and horrible history of torture suggests that “good luck with that” is again the correct response to any suggestion that we ban such behavior.

So should we throw up our hands in despair and ignore these issues? The story of King Canute is often misrepresented as an example of human arrogance. In the incorrect version of the tale, a powerful but arrogant king attempts to turn back the tide and fails. This failure has spawned the idiomatic phrase “attempting to stem (halt) the tide”, with the implicit meaning of a doomed fight*. Yet men and women of good conscience should attempt to stem the tide, even if their struggle seems doomed. Unlike Canute, we have some hope of stemming the future tide of misuse of artificially intelligent robots, at least for most nations and for some time.

* In the original version of this tale, the King’s goal was to demonstrate the importance of humility to his courtiers: some things cannot be stopped by even the most powerful humans, good or bad intentions notwithstanding. That's the wrong message for the purposes of this essay.

As proof of what is possible, I offer the example of the 1925 Geneva Protocol, an early attempt to limit the use of chemical and bacteriological weapons in warfare. Though the protocol has by no means eliminated the use of such weapons, the contrast with the use of chemical weapons (toxic gases) during World War I and earlier uses of smallpox-contaminated blankets in an effort to eradicate tribes of Native Americans is dramatic; rather than toxic gases and microbes becoming a standard part of the military toolkit, the use of such tools remains the exception, and one that attracts horror and often reprisals from the international community. People still die, often horribly, during warfare, but the conventions have greatly reduced the frequency of two horrible ways to die. The non-use of nuclear weapons since the end of World War II is another promising sign, though recent events in Iran and North Korea give me cause for hesitation.

As a cynic, I don’t think we’ll suddenly evolve sufficiently ethical behavior on a global scale to win this fight. Thus, I see no plausible way to avoid the creation of warrior robots and sexbots. But the successes in limiting other abuses make the fight no less worth fighting. We may not be able to stop either form of abuse, but we may at least limit its scope. “Good luck with that” is not an acceptable response when so many lives, whether natural or artificial, will be affected.
blatherskite: (Default)
Just finished Cherie Priest's Maplecroft: the Borden Dispatches, and like the other examples of her writing that I've read, I can recommend this one highly.

Maplecroft is a carefully researched "what if?" about the historical figure of Lizzie Borden ("Lizzie Borden took an axe, gave her mother forty whacks..."), crossbred with a Lovecraftian "bad things happen to good, bad, and indifferent people because the universe at best ignores us and at worst, actively hates us". The basic premise is that Lizzie wasn't a crazed murderer, but rather someone who fell into Lovecraft's world and was forced to defend herself and her loved ones as best she could, with wholly inadequate tools. It's far more restrained linguistically than Lovecraft, and (for obvious reasons) not misogynistic, and therefore it's more deeply affecting. The story is told as an epistolary (i.e., via letters and journal entries), which proves to be a very effective way of introducing many POV characters who don't always understand what the other characters are doing or thinking. Priest combines the best of first-person narration with unreliable narrators, and does so masterfully.

Lizzie, though our main protagonist, is accompanied by several other key viewpoint characters. Like a late-Victorian Buffy the Vampire Slayer with her "Scooby gang", Lizzie courageously fights the forces of darkness that have chosen to destroy her family, while simultaneously dealing with the "mundane" and in many ways equally horrible ravages of "consumption" (her sister's losing fight with tuberculosis)*. Like Buffy, she and other characters make many well-intended mistakes (some tragic) that have profound consequences. I won't spoil things by telling you how the story turns out, but it's a deeply human tale of a struggle against impossible odds and incomprehensible forces. As in the best Lovecraft, there are costs and consequences for everyone who gets drawn into the darkness. Nobody escapes completely intact, no matter their intellect or virtue.

* A very interesting parallel if you want to go all lit-crit.

One non-spoiler false note: Because the 1890s were a key period during which the scientific enlightenment really got rolling good and hard, several protagonists try to explain what's happening to them in scientific terms, even as they learn that this worldview doesn't match their increasingly Lovecraftian world very well*. This is fine so far as it goes; we humans use our mental models of how things work to understand our world, and the scientific worldview was a key mental model at this time. Where this goes astray is when Priest enlists it as a valid mechanism for dealing with the inexplicable and fighting the unfightable. To me, it would have been more effective to leave the inexplicable unexplained and show how the mental model failed; when you cling to a reassuring belief (here, that anything can be understood through the application of logic and science) while the world falls apart around you, the horror is compounded when that belief proves false. This authorial choice doesn't in any way ruin the book, but it diminishes some of its punch towards the end.

* Brian Lumley wrote a bunch of stories in this vein. They're enjoyable works on their own terms, and a nicely executed response to Lovecraft (i.e., humans *can* fight successfully against madness and a hostile universe through rationalism and technology or technologized magic), but as a result, I felt they lost some of their punch. Charles Stross strikes me as doing a better job of mashing up science and Lovecraft, particularly in the deeply chilling A Colder War.
blatherskite: (Default)
Just finished reading "Corporate Espionage", by former NSA analyst and current "white hat" hacker Ira Winkler. It's about the many ways both hackers (those who penetrate computers for the fun of it and bragging rights) and crackers (those who penetrate computers for malicious purposes) sneak into companies and extract potentially billions of dollars of proprietary information -- and in the case of banks, sometimes literal millions of dollars.

But it's about much more than that: it's a detailed treatise on how spies of all sorts sneak into (penetrate) companies by exploiting vulnerabilities. And the most serious vulnerabilities are almost inevitably human, not technological, though many of the technological vulnerabilities persist only because humans let them. Understanding the way people work and respond to both co-workers and other people lets hackers and crackers use "social engineering" techniques to gain access to areas where they don't belong and escape with astonishing amounts of information.

The book was written in 1997, so it's a bit out of date in some areas (e.g., Winkler discusses modems as a major point of vulnerability), but the basic principles remain valid (now it's cable modems or routers that are key points of vulnerability). It's also a fascinating updating of Bruce Sterling's "The Hacker Crackdown" (1992), but written by someone who lives the life rather than by a journalist. (No diss at all intended for Sterling, who really did his homework.)

What's really disturbing is how little has changed in the 20-some years since these books were published. Although Winkler doesn't provide hard or verifiable (i.e., referenced) data in most cases, billions of dollars were being lost annually even back in the 1990s, and the losses have probably grown by at least an order of magnitude since. Anyone who doubts this should contemplate the recent rash of penetrations of U.S. government computers, which have full-time and highly motivated security staffs protecting them; Edward Snowden; the recent antics of Chinese government-sponsored crackers; and the whole "Anonymous" movement.

What's even more disturbing is that we're currently in a "cold war" situation, with most of the hacking and cracking being done by amateurs or by professionals with very limited goals (e.g., stealing specific trade secrets). One can only imagine what would happen if a true cyberwar erupts.

And imagination is why I'm sharing this review here. Winkler's book is a great resource for writers if your only prior experience with cracking comes from Hollywood, which rarely gets any of the details right. (I've just started watching "Mr. Robot", which looks to be that rara avis -- something where the writers actually understand what they're writing about. Thus far, it looks excellent.) Winkler gets the key details right, and in a very disturbing way. But he's not just a fear-monger. He concludes the book with a long list of advice on how companies and governments could be doing better to protect their -- and our -- data.

Highly recommended source material if you want to write about cracking and cyberwar. Or if you just want to suggest the need to improve your employer's protection by anonymously leaving a copy of this book on the president's or CEO's desk.
blatherskite: (Default)
One of the things you notice (at least if you're paying attention) is how life falls into certain rhythms. The daily cycle from waking to sleeping is the most obvious; the annual cycle shows itself in the turning of the seasons. But whether or not you've been paying attention, these and many other rhythms affect your work life, and that, in turn, affects your "real" life outside of work. Rather than fighting these patterns, it's wiser to find out how to "go with the flow" and use them to your advantage.

For example, I have a very clear daily pattern. I usually have a mug of half-caffeinated coffee with breakfast, then once it's kickstarted my brain enough for me to be recognizably sentient, I go check e-mail, reply to the simple messages, and generally get my day's tasks sorted out. Then I indulge in a second mug of coffee to bring me up to full mental speed before I begin my real daily work. A single mug of full-caffeine coffee right at the start would arguably be more efficient, but I enjoy coffee for its own sake, not just as a performance-enhancing drug.

While my brain is coming up to speed, I focus on doing some of the more mechanical editorial tasks that don't require full sentience. These are things like responding to more challenging e-mails that actually require some thought and checking the literature citations and References section in the day's manuscript. Once I'm fully up to speed, I dive into the challenging work of figuring out what my author is trying to say and finding ways to help them say it. Mid-day, I'll go out for a walk to do any errands that need doing. Towards the end of the day, as my ability to concentrate wanes, I'll leave the computer and do some stretching exercises for half an hour -- kind of a moving meditation, without being anything as sophisticated as actual yoga or tai chi. Refreshed, I return to finish any remaining work, and when that's done, shut down the computer, go do aerobics or weights, and finish the day with Madame.

Understanding this rhythm in how my body works lets me match the nature of the work to the amount of sentience available for me to allocate to that work. During pre-sentient periods while I wait for the coffee to kick in, I get a lot of work done that doesn't require much in the way of brainpower; once the coffee is working, I focus on the work that requires focus. It would be a waste of time and effort to try accomplishing the really demanding stuff while my brain isn't up to the task, and a more serious waste of time doing low-brainpower work while my brain is working at peak efficiency. Accounting for how my brain and body work makes me far more efficient and effective than I would be if I tried to fight those rhythms.

Annual rhythms are more complex. Most of my editing clients are researchers, and pretty much all of them live in the northern hemisphere. So their work schedules are affected both by the same annual turn of the seasons I experience and (for university researchers) by the ebb and flow of the northern hemisphere school year. This pattern is further complicated by whether they work primarily in the lab (including on the computer or in the library) or in "the field" (i.e., outdoors somewhere).

Lab scientists are only weakly affected by the turn of the seasons. Instead, they are strongly affected by things such as the annual funding cycle. For example, if they've budgeted a certain amount of money for editing and publication of their research papers, they need to spend that money before the end of the fiscal year, and that annual budgetary period creates deadlines for their writing. My government authors tend to have a 1 April* start to their fiscal year, so I know they'll be doing their best to spend their remaining budget in February and March; that means they send me a ton of work at this time. Then there's a lull as they pause to catch their collective breath and resume the cycle. If they work at a university rather than a government or private institute, they also tend to try to finish their work before school starts (August and January) or after it ends (December and May) so that they aren't being distracted by their teaching requirements or the demands of their students.

* The irony of government budgets being determined by April Fool's Day does not escape me.

Field scientists are also constrained by the school year if they work for a university, but more importantly, are governed by the seasons. Because my work relates primarily to environmental and ecological subjects, they need to work during the time when their study subjects are alive and growing or moving around. Having done some field research myself, I'm also keenly aware that it's more fun being out in the field during clement summer weather than at -30C in the winter, and scientists being human, they tend to schedule their research for the summer even if it could (in theory) also be done during the winter. So summer is usually a lull period for them from a writing perspective, but they get quite busy once they return home in the fall (September onwards), with computers full of data to analyze. They also get quite busy in the month or two before they leave to begin the new season's field research -- peer reviews of a manuscript typically take months, so it's efficient to schedule those reviews while they're away from the office -- so March and April also tend to be quite busy.

Over time, I've learned that these patterns determine my work load at any given time of year. Knowing the patterns lets me take measures to even out the flow. For example, I send out a warning e-mail a couple months before the typical busy periods to tell everyone that they should reserve my time well in advance, or ideally send me work before the busy period begins. This lets me allocate the available time to each of them who's likely to need it and reduces the number of really long days when I need to work on two manuscripts simultaneously to meet client deadlines. Conversely, before predicted slow periods, I send out an e-mail suggesting that these periods would be a great time to work with me because they won't be competing with everyone else for my time. There are still, inevitably, heavy and light periods, but they're less heavy and less light than they might otherwise be. And I'm less stressed dealing with the heavy periods.

This proactive management of my schedule also lets me do things like arranging vacations during periods when my work load would ordinarily be lowest. That minimizes the amount of income I'd lose by not being available during a busy period, and equally importantly, minimizes the amount of work that arrives in the weeks before I leave and that accumulates while I'm away.

If you're a freelancer, I encourage you to do a similar analysis of your workflow and use the results to better manage your life. If you're an employee, the advice is equally valuable, but you'll have different busy periods; your company's budgeting period may use the calendar year rather than 1 April, the work of your colleagues may be governed by the annual schedule of important trade shows or government grant application periods, and so on. Learning these annual patterns is the first step in finding ways to control your work schedule -- or finding ways to go with the flow rather than fighting it.
blatherskite: (Default)
Just finished editing a paper about embryonic development, in which the authors present a batch of cross-sectional data showing the relative positions of certain embryonic structures (e.g., the heart) at different times during embryonic development. It's creepily fascinating the way things move around; for example, the embryo's heart moves downwards from the neck region, passing through arm structures en route to its final destination in the thoracic cavity.

I was also fascinated to see that the authors didn't seem to have thought beyond the print communication model, which of necessity requires the presentation of static images. But most journals now encourage authors to publish "supplemental information" on their Web site; this is information that would be impossible to publish in the printed version of the journal. Reasons for this impossibility include a requirement for color (which remains very expensive to print), the massive size of a dataset (e.g., large genetics databases), or -- most interesting to me -- information that would benefit greatly from the multimedia capabilities of the Web (i.e., sound and video).

Once every couple months, I find myself encouraging authors to take advantage of this "new" possibility. In the context of the embryo paper, the authors used 3D modeling software to create static anatomical images showing the positions of various structures, which is great as far as it goes. But they didn't consider the possibility of providing the actual models as supplemental material, which would allow readers of the paper to download the models and move through them the way doctors move through CAT and MRI scans to observe the characteristics of a structure in three dimensions. Neither did they use the software to produce an animation that shows how the anatomy evolves progressively during embryonic development.

Such visualizations would be an important tool for helping readers understand both anatomy and its changes over time. Yet the authors didn't think of this! It's a sufficiently important omission that I devoted an entire chapter to this subject in my recent book, Writing for Science Journals.

If you're a communicator (writer, editor, other), it's always worthwhile stepping back for a moment and asking yourself whether you're a little too comfortable inside your particular box, or whether stepping outside that box would reveal powerful additional tools for effective communication.
blatherskite: (Default)
Before diving into the meat of this essay, let me define a few terms related to how software is developed and tested:

Audience analysis is how you begin the development process: you spend some time thinking about how people are likely to use your software, confirm these suspicions with real users (if possible), and plan accordingly; that is, you design the software to support its users during the common tasks they will be performing with the software. Ideally, you use some form of Pareto optimization, in which you prioritize the subset of the product's features that are used most often or that provide the maximum benefits to the maximum number of users, then add more features as time and resources permit. I've written extensively on this subject.
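To make that kind of prioritization concrete, here's a toy sketch in Python. Everything in it is invented for illustration (the feature names, scores, and budget come from no real product plan): rank candidate features by estimated benefit per unit of effort, then fund them greedily until the development budget runs out.

```python
# A hypothetical Pareto-style triage: invented features, invented numbers.
features = [
    # (name, users_helped, benefit_per_user, effort_in_days)
    ("track changes",    9000, 5, 30),
    ("spelling checker", 9500, 4, 20),
    ("macro scripting",  1200, 8, 60),
    ("mail merge",       2500, 3, 25),
]
BUDGET_DAYS = 80  # development time available before the release date

def value_density(feature):
    """Estimated benefit to the whole audience per day of development effort."""
    _name, users, benefit, effort = feature
    return users * benefit / effort

plan, spent = [], 0
for feature in sorted(features, key=value_density, reverse=True):
    name, _users, _benefit, effort = feature
    if spent + effort <= BUDGET_DAYS:
        plan.append(name)
        spent += effort

print(plan)  # the core to build first; everything else waits for a later version
```

The point isn't the arithmetic, which any spreadsheet could handle; it's that the ranking forces you to write down who benefits, and by how much, before anyone commits programmer time.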

Alpha testing is what the programmers do, often with help from a company's "quality assurance" or testing staff, before they consider that a product is ready to be unleashed on its eventual users. In an ideal process, you ensure that all the software features work as required (based on the audience analysis), and fix all the bugs you discover while you're validating the code that you've produced; you might do this yourself, or possibly with a colleague's help, depending on the software development culture at your employer. In practice, programmers and quality assurance staff rarely have time to do either task to their own satisfaction, because software release schedules are driven by marketing, not by the programmer's desire to release a product they can be proud of.

Beta testing is what happens after alpha testing. In this step, you release your best shot at producing a stable, usable product to a subset of its users, most of whom are not employed by your company. This group then tests the software to destruction to make sure it works. The principle is that several hundred people (sometimes thousands) banging away at the program in ways you hadn't anticipated will reveal subtle flaws you didn't notice during your alpha testing, thereby giving you time to fix them before you release the "final" version of the product. (I put "final" in quotations because as any computer user knows, software is never final. There are always subtle bugs that go undetected or that are sufficiently rare a company figures it can wait to fix them. And each new round of patches, fixes, and updates tends to introduce new errors. This isn't professional incompetence; it's inherent to the nature of any complex system, and software is pretty damned complex.) Beta testers receive the software for free during the testing period, and often receive a free license for the final shipping version of the software to compensate them for their efforts.

If all three steps go well, the product that is finally released for sale is stable and usable. How well does Microsoft meet these criteria? In terms of its Windows operating system, amazingly well, particularly given the (literally) more than a billion people who are using one of the various versions of Windows. In terms of Microsoft Office? For the Windows version (WinWord), quite well. I've been profoundly impressed by how well Word has worked with each new version that I've installed: it's become more stable, less buggy, and (often but not always) easier to use. But on the Macintosh side (MacWord)? Not so much. MacWord has been bad enough that when I train editors to use Word for editing, I always start with the recommendation that they use the Windows version, even if they use a Macintosh as their primary computer. WinWord has simply been a better product for the nearly 20 years I've been using Microsoft Word. MacWord 2016 promised to bring the Mac version up to parity with WinWord. Did they meet that promise?

Caveat: I haven't used MacWord 2016 myself, since I'm waiting for the first major round of bug fixes before I install it on my work computer. I'm basing this review on the demos I've seen, a few online reviews, and my wife's experience with the first shipping release of the software. The TLDR (too long, didn't read) version: My wife, who's been working with word processors for something like 30 years, got so frustrated with Word 2016 that after a couple days of cussing it out, she abandoned it and returned to Word 2008 -- which is also severely flawed, but is at least stable.

In terms of audience analysis, MacWord 2016 is a nice move in the right direction. One of the horrible problems with MacWord has been how radically it differed from WinWord in its interface. Microsoft has always justified this based on the outdated notion that users of the two operating systems have different expectations, an argument that has some merit. However, this argument ignores three key points about real-world use of Word: First, most computer users are now proficient at switching interfaces when (for example) they move from Internet Explorer to Firefox or Chrome. The difference between Windows and the Mac is now largely irrelevant, and is often less disruptive than switching between programs under the same operating system. Second, and much more important, releasing versions of Word with sometimes radically different behavior means that even after users figure out the interface differences, they can't rely on the software to behave identically. Third, and most important in the working world, these two factors create an impossible situation for trainers, who must master at least two different versions of the software (Mac and Windows) and find ways to teach the two groups of users about the differences. At a crude guesstimate, my book on onscreen editing is about 25% longer than it would need to be if the interface and behavior were consistent between versions.

With Word 2016, Microsoft has finally taken some significant steps to make the user interface look more similar between Mac and Windows, but the software still doesn't behave the same on both platforms. A large part of the problem results from a failure in the alpha testing process, which let profound bugs slip through that should never have been revealed to the eventual audience, and by insufficient time allocated to beta testing, which would have given the programmers time to solve those problems.

In terms of alpha testing, Microsoft seems to have fallen on its face -- again -- with MacWord. Many features that they broke when they first released Word 2011, and took months or years to fix, were broken again in the first release of Word 2016 (e.g., not displaying the correct tab of the Ribbon for the current context, a "garbage collection" error that generates the alarming message that there's not enough disk space to save the open file, even if a ton of space is free). This wouldn't be such a problem if the beta testing process had been given enough time to detect such problems and let the programmers fix them. But the current version of Word 2016 is a shipping product: you have to pay money for it. That's entirely inappropriate for a product this buggy.

Bottom line: Microsoft should be ashamed of this performance, and should not expect customers to pay for such a shoddy, unfinished product. My recommendation that professional editors should stick with WinWord still stands. In a month or two, after Microsoft releases the first major service release for MacWord to fix these egregious problems, I'll be willing to risk installing it, and will then report back on whether they've come close to parity with WinWord -- or even produced a usable product.
blatherskite: (Default)
The best humor, as in the case of later books by Terry Pratchett, is both funny and profound. I can't think of any current writer who comes close to Pratchett in combining the two attributes, but Jasper Fforde makes a valiant effort, as the latest book in his "Thursday Next" series reminded me.

I'll skip irrelevant background context and come right to the point: Fforde reminds us of the need to devote a little time each day, even if only a few seconds, to pondering something about the world and to laughing.

Pondering reinstills a sense of the wonder of the world. I get some of that from the scientific perspective in my daily work: the deeper you delve into ecology, the more ramifications and recomplications you discover. In the words of Jonathan Swift, "So nat'ralists observe, a flea / Has smaller fleas that on him prey; / And these have smaller fleas to bite 'em. / And so proceeds Ad infinitum." Or as the Hindu world myth would have it, "it's elephants all the way down". But there are many wonders other than science to be experienced if you pause a moment to ponder; my favorite recent insight was into just how weird it must be to be a house cat, and to be owned by something inexplicable that is close to 20 times one's own size. Imagining what that must be like nearly blew my mind. Then there are the daily miracles of a lover's smile and the touch of her hand.

Laughter, of course, has its own rewards, particularly when shared. My favorite recent geek joke was Fforde's throwaway line about a new compression format for jokes, JAPEG*. Sheer brilliance! But humor can be much more profound, as in the case of Québecois comedian Martin Matte**, who recently delivered a funny and touching tribute to his father. My favorite bit was his reflection, driving home from the funeral home with his father's ashes in an urn in the passenger seat, about whether he could legitimately take the commuter lane reserved for cars with two or more passengers. And whether his father would be "burned" if a cop stopped them.

* For the less geeky: a "jape" is a joke, and JPEG is the current standard for compression of photographic images.

** And pause a moment to appreciate the beauty of a world that has an École Nationale de l'Humour in it.

Laughter has the additional virtue that it makes the Forces of Darkness gnash their teeth in frustration. There are days when I think they're winning, but it does my heart good to deny them the satisfaction of making me resent it. There are virtues to a heroic death, but given the low likelihood of such an outcome from a humble editor's life, I'll be happy to die with a laugh on my lips and the sound of grinding teeth in the cosmic background.
blatherskite: (Default)
Writing about artificial intelligence requires one to deal first with the thorny issue of human intelligence*, since it’s helpful to start with an idea of what you’re discussing to provide context and set the evaluation criteria. Getting to a useful working definition is difficult, since it’s tempting to fall into tautology: as the beings who are tasked with the need to come up with a definition, it’s perhaps inevitable that we’ll define it in a way that shows us in the most favorable light. You can see this thought process at work in how ethologists (scientists who study animal behavior) have historically kept moving the goal posts each time some animal is found to be able to accomplish one of their sacred “human-only” skills.

* The story goes that Mahatma Gandhi was once asked what he thought about Western civilization, and that he replied with Shavian wit: “I think it would be a good idea”. Sadly, the same statement might be profitably applied to human intelligence.

With the caveats that all such lists are provisional and that a good science fiction author (or psychologist or ethologist) will be able to propose interesting exceptions to each of the following criteria, here’s my starting point for a list of the key attributes that indicate intelligence:

  • tool use: the ability to go beyond the limits imposed by one’s body by finding or creating suitable tools, whether levers to move rocks or words to move hearts

  • symbol use: the ability to use symbols (words, facial expressions, gestures, whatever) to communicate information

  • abstract thought: the ability to describe a problem in a generalizable way that allows one to solve different but related problems, possibly based on pattern-recognition skills

  • learning: the ability to form and store memories in a way that allows them to be retrieved and compared (note that this also requires pattern recognition skills)

  • a sense of time: the ability to define cause and effect relationships requires an ability to understand what comes first (the cause), what follows (the effect), and the time that elapses between the two (i.e., a short time suggests causality, a longer time conceals causality)

  • goal-seeking behavior: the ability to set a goal and find ways to achieve it rather than merely accepting what the world gives us (note that this assumes an ability and desire to change our environment instead of being forced to change in response to it)

  • self-awareness: the ability to recognize oneself and distinguish between oneself and others

  • emotional awareness: the ability to recognize our responses to a situation (emotions) and deal with them

  • delayed response: the ability to consider the aspects of a problem before acting rather than just responding by reflex (i.e., judgment)


One thing that this list leaves implicit but that should be made explicit is the fact that there is both gestalt and synergy at work here: intelligence is both the sum of all these things and something greater than that sum. Simply checking off each item on the list is not sufficient to define someone or something as intelligent. For each of these criteria, humans differ from other animals primarily in the degree to which we can meet them. For example, crows can solve complex problems that would baffle some humans, and cetaceans are arguably even more intelligent; some dolphins have even learned to cooperate with human fishermen.

In addition to these criteria, I would claim that natural intelligence has three “mechanical” requirements that will lead us directly into a discussion of artificial intelligence. The thoughts that are the hallmark of human intelligence require three things to function: an engine capable of operationalizing the thoughts (i.e., the human brain), fuel capable of driving that engine (i.e., knowledge), and some kind of software that forms relationships (e.g., language, mathematics) and that drives the engine to accomplish something. For artificial intelligence, the engine is a computer, the fuel is data (“big” or otherwise), and the software is (duh!) the software. For both human and non-human intelligences, one might productively argue that a fourth factor is necessary: the ability to compare experiences with others, whether through conversation (humans) or Internet connections (computers).

    With the same caveat that many people will move the goalposts as soon as it looks like a computer might be reaching the same level of these skills as humans, how close are computers to meeting the criteria with which I started this article? Let’s take each point in turn:

  • tool use: Computer-controlled manufacturing (e.g., assembly lines) and mobile robots (whether on Mars or here on Earth) are clear evidence that computers can use tools. They’re not yet capable of creating their own tools, with the limited exceptions described below.

  • symbol use: Codes such as the binary “words” that lie at the base of all computer software are proof of symbol use; more advanced symbol use is demonstrated by the increasingly powerful examples of image-recognition software (e.g., facial analysis, feature extraction) and by assistants such as Apple’s Siri and Microsoft’s Cortana, which can not only recognize simple speech but reply and take actions in response to that speech.

  • abstract thought: Thus far, computers have not achieved what we would typically consider to be abstract thought. But that statement depends heavily on what we consider to be “abstract”. For example, Mathematica can perform remarkable feats of mathematical problem-solving.

  • learning: Neural network software and genetic algorithms can clearly “learn” and preserve that learning, albeit with some assistance from us. Furthermore, there’s no reason (other than a lack of interest in doing so) why programmers have not designed operating systems capable of learning our preferences and adapting to them. We can manually force software to do this through our “preference” settings and control panels, but I want a computer that notices how I manually back up data to a flash drive every hour or so and offers to do this for me. Voice recognition software already learns our unique vocal characteristics, so this kind of adaptation is clearly possible. (A toy sketch of this kind of habit-spotting appears after this list.)

  • a sense of time: Software is inherently time-based, and statistical software can detect correlations between events or factors, but the recognition of cause and effect relationships is still some way off.

  • goal-seeking behavior: This is the whole basis of machine learning, so clearly software can seek goals. It can’t yet define its own goals, however; we still tell it what our goals are and command it to meet those goals.

  • self-awareness: A computer’s ability to recognize itself and distinguish between itself and other devices is inherent to such things as the media access control (MAC) address that uniquely identifies a computer’s network card and the IP addresses that underlie the URLs we type into our Web browser. Unfortunately, that’s a primitive talent compared to (for example) the ability to explore the implications of cogito ergo sum.

  • emotional awareness: To the best of our knowledge, we haven’t been able to program emotions into computers, in large part because we define emotions based on complex biochemical reactions that lead to complex neurological responses that haven’t yet been emulated in software. But fields such as the design of facial recognition software are advancing rapidly, and it won’t be long before our computers can recognize when we’re sad or happy based on a glimpse of our face.

  • delayed response: All modern software has the underpinnings of this skill, since the software is generally event-driven (i.e., it waits for something to happen and some criterion to be met) and then chooses how to respond based on a series of hardwired criteria for the appropriate response to any given event. Problem-solving and goal-seeking (optimization) software already exists, and will rapidly become more sophisticated. However, software generally can't improvise in response to events that were not anticipated by its programmer and it may be a very long time before it acquires this ability.
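To make the wish in the “learning” item above concrete, here's a toy sketch; the class and everything in it are my own invention, not a feature of any real operating system. The idea: record when the user performs some manual action, and if the intervals between occurrences turn out to be regular, offer to automate it.

```python
# A purely hypothetical habit-spotter: watch a manual action and offer to
# automate it once the timing looks regular.
from datetime import datetime, timedelta
from statistics import mean, stdev

class HabitWatcher:
    def __init__(self, action_name, min_events=5):
        self.action_name = action_name
        self.min_events = min_events      # don't guess from too few observations
        self.timestamps = []

    def record(self, when=None):
        """Call this each time the user performs the action manually."""
        self.timestamps.append(when or datetime.now())

    def suggest_automation(self):
        """Return a suggestion string if the action recurs at a regular interval."""
        if len(self.timestamps) < self.min_events:
            return None
        gaps = [(later - earlier).total_seconds()
                for earlier, later in zip(self.timestamps, self.timestamps[1:])]
        average = mean(gaps)
        # "Regular" here means the gaps vary little around their average.
        if stdev(gaps) < 0.2 * average:
            return (f"You seem to {self.action_name} about every "
                    f"{timedelta(seconds=round(average))}. Shall I do it for you?")
        return None

# Hypothetical usage: the operating system would call record() whenever it
# noticed the action, then periodically check suggest_automation().
watcher = HabitWatcher("back up your documents to the flash drive")
```

A real version would need to watch many kinds of actions and cope with much noisier schedules, but even something this crude captures the “notice what I do and offer to do it for me” behavior I'm asking for.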


Bottom line? All the rudiments are in place for the evolution of a true artificial intelligence. We already have the software equivalents of idiots savants, which are very good at one or a few things and completely hopeless at every other task. Computer scientists are aggressively pursuing more sophisticated systems, and they’re likely to produce increasingly impressive results. I have no doubt that within my lifetime, they’ll come up with software capable of passing the Turing test.

    But this leads us to the question of whether AI might evolve spontaneously. My take? It’s more likely than one might think. For evolution to occur, several criteria must be met:

  • Evolutionary pressure must be exerted on the organism: If there is no “need” for a group of organisms to change, then evolution is conservative and tends not to cause a change. Survival is the usual “need” that produces change: organisms that survive because they’re adapted to their environment pass on their genes (see the next point); those that fail to survive don’t pass on their genes. For software, the evolutionary pressure is imposed by computer scientists (the blind watchmakers of the software universe), but with the rapid advances being made in genetic algorithms and self-modifying software, it seems likely that setting goals for such software and weeding out software that fails to meet those goals will create enormous evolutionary pressure.

  • The organism must be able to change and retain those changes: In nature, the mechanisms that permit this adaptability and memory are genes. Since the whole point of being able to update and upgrade software is to change and retain those changes, computers are clearly capable of this function. Self-modifying code will take this to the next level.

  • Notwithstanding the previous points, random events are also important. Just as most mutations in the human genome are counterproductive or even fatal, computer programs are unlikely to improve or even continue functioning after experiencing a random change in the code. But it’s not hard to imagine software becoming orders of magnitude more robust than it currently is and becoming able to cope with and even benefit from such glitches.


Again, all of the rudiments for evolution are in place. But an evolutionary leap forward to something new and recognizably intelligent won’t happen soon. The current dominant model for software development is “command and control”: a programmer defines the behavior required of their software and sets that behavior in stone; when the software doesn’t behave as desired, it’s debugged and redesigned until it does. But there are signs that we’re moving towards something more interesting, in which the programmer instead defines the goals and constraints and lets the software figure out how to accomplish those goals. When software becomes broadly capable of such feats of insight, we’ll see a true sea change, in Shakespeare’s original sense of “a sea-change into something rich and strange”.
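As a concrete (and deliberately toy) illustration of that “define the goals, let the software find the how” model, here's a minimal genetic algorithm in Python; nothing in it describes any real AI system. The fitness function supplies the evolutionary pressure, crossover lets offspring retain their parents' changes, and mutation supplies the random events; the programmer never specifies how to reach the target.

```python
# A toy genetic algorithm: evolve a random string toward a stated goal.
import random
import string

TARGET = "a sea-change into something rich and strange"
ALPHABET = string.ascii_lowercase + " -"
POP_SIZE = 200
MUTATION_RATE = 0.01

def fitness(candidate):
    """Evolutionary pressure: how many characters already match the goal."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def crossover(a, b):
    """Inheritance: a child keeps pieces of both parents."""
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

def mutate(candidate):
    """Random events: occasionally replace a character with something new."""
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(POP_SIZE)]
for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if best == TARGET:
        break
    parents = population[: POP_SIZE // 5]   # only the fittest get to breed
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print(f"Best candidate found after {generation} generations: {best}")
```

Run it a few times and the point becomes clear: the watchmaker only scores the results; the solutions themselves emerge.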

    What we’ll then begin seeing is emergent behavior, as in Dolly, Elizabeth Bear’s brilliant and chilling story.
    blatherskite: (Default)
    Unless you’ve been living in an unusually arid desert the past few years, you’ve undoubtedly heard of cloud computing—or, more simply, “The Cloud”. But what exactly is The Cloud? It’s a nebulous concept, and that makes it hard to pin down precisely what it means. The variety of interpretations doesn’t help. So in this article, I’ll attempt to de-mistify the concept so you can think a bit more clearly about how it works and how you can use it safely.

    The original notion behind the cloud metaphor was that traditional computing was like a pot of water: everything was all together in one place, with all the limitations that this entailed, including the risk of losing all the water if someone knocked the pot off the stove. But imagine, The Cloud’s inventors proposed, if that water were more like the Internet: if you turn up the heat until the water boils, you get a cloud of steam—a bunch of dispersed droplets of water, that nonetheless function as if they were a single entity. At least, they do until a strong wind comes along and disperses them and they can no longer function as a single thing. That’s implicit in the metaphor, and we’ll come back to it presently.

    In more technical terms, The Cloud represents a widespread collection of computing assets that function together as if they were a single thing. Those assets may be computers and other hardware, software, data, or some combination of these categories of things. The individual components can also cover for each other, so that if one is lost or damaged, the others continue to function as if nothing happened. A primary advantage of cloud computing is that it’s multiply redundant: if one part fails, then other parts will take over and unless you’re responsible for administering that part of The Cloud, you’ll ideally never know anything happened. In this sense, it’s like the old notion of a redundant array of inexpensive (now, “independent”) disks (RAID).

    When this approach works, it works very well indeed. The Internet itself is a great example of the overall principle, since it was designed right from the start to be a distributed entity so multiply redundant that it could survive a nuclear war by rerouting traffic around any pathways or nodes in the network that were eliminated. Though nobody’s tried to test the nuclear survivability of the Internet in a real-world trial, there have been many glitches that did their best to take the Internet or large parts of it down—usually as a result of human malice (e.g., denial of service attacks) or human incompetence (e.g., cutting a backbone cable that conveys the majority of a service provider’s service while digging a ditch).

    Because of this power, cloud computing should be part of everyone’s strategy. For example, I use DropBox’s file storage service to automatically back up my data, so if the roof falls on my computer, my files will still be safe on the Dropbox servers*. There are many other advantages. For example, you gain access to a dedicated staff of hard-core geeks who take care of your part of The Cloud to ensure that it stays up and running and that your data remains safe. I take reasonable precautions with my computer and data, but I don’t tend to it 24/7. I’ve got better things to do with my days and nights. (So do the geeks, but they get paid for their 8-hour shifts doing this work.)

    * If you don’t have a DropBox account, contact me by e-mail (ghart@videotron.ca) and I’ll send you an invite. The service is free, and accepting an invite gets you 256 Meg more storage than you’d get if you sign up on your own. Then you can invite all your friends and earn 256 Meg of additional storage for each one who accepts your invitation.

    If The Cloud is so wonderful, why do I remain so intensely skeptical about it? In part, because of the hype it’s been attracting. All you hear about are the benefits, and nobody warns you of the drawbacks. In the rest of this article, I’ll provide some suggestions of what those drawbacks might be, how they can turn a nifty coherent cluster of interacting droplets into a batch of damp floor, and how to protect yourself from such problems.

The first thing to keep in mind is that The Cloud is still in its early days, particularly compared with the Internet as a whole. Thus, it’s still being refined and hasn’t yet reached the same mature state of reliability as the Internet. A related problem is that there is no one “The Cloud”; rather, it’s a large collection of related services, with some overlaps and many non-overlapping areas, and everyone seems to define and implement it at least slightly differently, with correspondingly different levels of quality. DropBox, for instance, has maintained remarkable availability and security; in contrast, Apple’s ongoing availability problems with its iCloud service are a good example of why this immaturity is problematic: if you can’t rely on a Cloud-based service... well, you can’t rely on it. Duh! It’s become something of a truism that it generally takes three tries to get a design right, and only the oldest cloud services are working on their third full iteration.

    The subject of availability leads us to the important concept of a guarantee of service: a key service must be available when you need it, else it’s useless. This is particularly true when the cloud is used to provide software as a service, as in the case of Microsoft’s Office 365, which provides access to software such as Word via your Web browser. Microsoft has done this right in many ways: availability has thus far been pretty good, and if the service is down, you can keep working from a copy of Office installed on your computer. (If you're using their OneDrive service, you'll have access to all of your documents both via the online service and via your computer; they're kept in synch.) This is crucial for someone like me who spends five days a week earning their living using Word. The flip side is that if your computer dies, you can move to another device (another computer, but also increasingly a tablet like an iPad) and pick up where you left off. This is similar to the IMAP e-mail approach, in which your messages are stored on your service provider’s computers, but you can download a copy of the messages to deal with when you’re not connected to their computers.

Immaturity of the technology also means that security is an issue, and an increasingly important one. Like any version 1.0 or 2.0 product, The Cloud still has some holes. In the old pre-Cloud world, someone who wanted to break into your data only had one point of access: the one device that stored all of your data. With the cloud, your data may be stored across dozens or even hundreds of computers, each of which represents a potential point of entry. When a security problem is discovered, managers of a service typically “roll out” the fix on only a few computers initially to ensure that the fix isn’t worse than the original problem. Until they’re satisfied the fix works and they can install it safely on the other components of their part of The Cloud, the other parts remain vulnerable. This is particularly problematic because even though each implementation of The Cloud is somewhat different from all others, all implementations rely on certain shared protocols that let the different services work together. This can lead to widespread security problems when one of those shared protocols is compromised. Unfortunately, when you depend on a Cloud service, you also depend on its providers aggressively testing for such problems and responding rapidly when problems are revealed. When no one company is responsible for maintenance of something as important as one of the underlying protocols, it can take some time for problems to be detected and fixed.

    The Cloud is a great idea, and I use it judiciously as part of my business and personal computing strategies. But I don’t uncritically accept the hype. To account for the problems, I protect anything important in several ways:

  • I maintain security on my own computer (good antivirus software). And I skim several newsletters to be sure I’ll learn when a serious security problem has been discovered so I can take appropriate countermeasures (e.g., not use a compromised service or insecure software until the problem is fixed).

  • I back up all my data offline (on DVDs), near-line (in a hard drive connected to my computer), and online (via DropBox). If any one source is compromised, my data is safe on the other sources.

  • For the few things that are so important I need additional security, I encrypt the data. If someone should break into (say) DropBox and gain access to my data, they’ll have to break the encryption before they can use the data. (One way to script this step is sketched after this list.)

  • I rely primarily on software on my own computer, but have an old backup computer I can switch to if the main computer dies. I’m looking into Office 365 and iPad-based editing, but haven’t yet made this an integral part of my strategy.
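For the encryption step mentioned above, here's a minimal sketch of the sort of thing I mean, assuming the third-party Python "cryptography" package (installed with pip) and a Dropbox-style synced folder; the file and folder names are examples only, not a description of my actual setup.

```python
# Encrypt files locally before they reach the synced cloud folder, so a breach
# of the cloud service exposes only ciphertext. Assumes: pip install cryptography
from pathlib import Path
from cryptography.fernet import Fernet

KEY_FILE = Path("backup.key")            # keep this key OUT of the cloud folder
CLOUD_DIR = Path.home() / "Dropbox"      # or wherever your synced folder lives

# Generate the key once, then reuse it; losing the key means losing the data.
if KEY_FILE.exists():
    key = KEY_FILE.read_bytes()
else:
    key = Fernet.generate_key()
    KEY_FILE.write_bytes(key)
cipher = Fernet(key)

def encrypt_to_cloud(path):
    """Write an encrypted copy of a local file into the synced folder."""
    data = Path(path).read_bytes()
    (CLOUD_DIR / (Path(path).name + ".enc")).write_bytes(cipher.encrypt(data))

def decrypt_from_cloud(name, destination):
    """Recover a file from its encrypted copy in the synced folder."""
    ciphertext = (CLOUD_DIR / name).read_bytes()
    Path(destination).write_bytes(cipher.decrypt(ciphertext))

# Example: encrypt_to_cloud("contracts/client-agreement.docx")
```

The design point is simply that the key never leaves your machine; whoever breaks into the cloud service gets only gibberish.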


Distrust any cloud service that doesn’t let you take similar steps to protect yourself.
    blatherskite: (Default)
    Subtitle: Of Sheep and Men

    “[Harold’s] that most dangerous of animals, a clever sheep. He's the ring-leader.”—Eric Idle, Monty Python’s Flying Circus

    “All animals are equal, but some animals are more equal than others.”—George Orwell

    In the world of Aardman Animations, the life of a sheep is not an easy one: torn from bed at sunrise each day, fed nothing but a few scraps of corn, marched off to the paddock under guard by a snarling dog, locked in a drafty barn again at the end of the day, and occasionally sheared for the wool on your back, with no compensation for your labor. It’s exactly the kind of “boot stamping on a human face -- forever" world that Orwell imagined in his nightmares. This is hardly surprising, as Shaun the Sheep comes to us from the studio that brought you Chicken Run, a Swedish-cinema-noir-bleak study of man’s inhumanity to man, with the lecture delivered by way of The Great Escape, drawing parallels between the fate of innocent chickens destined for the meat-pie factory and that of the men imprisoned by the Germans during World War II. This is not your grandfather’s children’s movie.

    Ahem.

    Just kidding.

    As anyone who’s ever seen one of the Wallace and Gromit shows knows, Aardman Animations has a unique gift for telling gentle, funny, heartwarming stories that are as much a pleasure for adults as for kids. You tend to leave the cinema with a big-ass grin on your face, and Shaun is no exception.

    Plot synopsis: Shaun, our protagonist, has grown bored with the daily grind and, being that most dangerous of animals, a clever sheep, decides to break the mold. With the help of his woolly partners in crime, he tricks the farmer into falling asleep on the job (by having the herd run past his eyes, then loop behind his back, over and over, so that the poor farmer finds himself counting sheep* ad infinitum). Once the farmer’s snoring, they tuck him into the cot of his camping trailer, draw the window shades, and enter the house for a day of recreation, planning to make popcorn and pizza, drink martinis (made from a bouquet of flowers), and watch videos on TV. Unfortunately, the trailer hasn’t been properly secured, and it rolls downhill into The Big City, bearing the unwitting farmer. Plot complications ensue, starting with the farmer’s three pigs taking over the house (“While the sheep’s away, the pigs will play”?) and really getting going when the trailer comes to a halt and the farmer emerges, gets bonked on the head, loses his memory, and ends up in the hospital. When freedom loses some of its attraction for the sheep, they sneak into town to mount a rescue operation.

    * The level of background detail is phenomenal; Aardman really pays attention to the minutiae that give a story three-dimensionality and a sense of being real. In addition to the “counting sheep to fall asleep” joke and the fact that little pigs inevitably come in threes, there are dozens of small visual and other jokes along the way. These include the four sheep, camouflaged as humans in stolen clothing, crossing the street in an homage to the iconic Abbey Road Beatles album cover; a signboard for The Big City that lists its sister cities as La Grande Ville, Grossestadt, and Gran Ciudad; a poke or three at the fashion-conscious and trendy; the sheep, being sheep, not knowing human social conventions for restaurants and playing the innocents abroad by imitating the behavior of those around them; a hilarious poke at prison films (including the scene in every cowboy movie ever made in which someone busts the hero or villain out of jail); a QR code that went by too fast to capture but that turns out to be an easter egg; and the very-meta road sign labeled “Convenient Quarry”, which leads up to the climactic and terrifying (at least for a couple of 5-year-olds in the audience) confrontation with the villain of the story. My favorite was the “baabaashop quintet” pun. For a possibly complete list, see Luisa Mellor's list on the Den of Geek! site. Honestly, how do people spot all these things?

    It’s all good-spirited fun, with nobody getting seriously hurt**, no foul language (other than some fowl language from the rooster), clever animation that uses facial expressions and other visual cues rather than actual words to convey almost all of the dialogue, and a sheer generosity of spirit that will leave you grinning like a fool. Do stay to the end of the credits for yet another easter egg.

    ** However, in a sinister but possibly unintended touch, there didn’t seem to be any statement that “no animals were harmed during the production of this film”, despite a solicitous note that brain injuries such as the one suffered by the farmer are potentially very serious, accompanied by a link to the Headway Brain Injury Association Web site.

    For the official trailer and several other goodies, visit the official Shaun the Sheep Web site.
    blatherskite: (Default)
    Back when I worked for a single someone else instead of 200+ someone elses, my boss used to come to me periodically and ask me to cut a manuscript's length by 50% or more. I rarely had much trouble doing it, even for reasonably good writers who weren't egregiously verbose. Apart from my having a ton of practice applying this skill, it helps that English is highly redundant; our language contains a surprising amount of built-in error-proofing to ensure clear communication.

    But I also have a gift of seeing what’s important and what isn’t. Thus, I’ve often told my authors that it’s possible to tell any story in 50 words or fewer, and when they don’t believe me, I show them. For example, how would I describe “economics” in 50 or fewer words? Here are two thoughts:

  • Cranky mode: “A collection of logical fallacies that stem from the erroneous assumption that Homo economicus is common and that markets are fair.” (21 words)

  • Respectful mode: “Sometimes-profound insights into how and why humans make resource-allocation decisions.” (12 words)

  • (Pause to admire how the respectful mode is... ahem... more economical of words.)

    Both could be shortened further using the various tips I’ll present in the rest of this essay. How about something really complex, like (say) genetics? How about: “Cellular computer programs that define how organisms grow, develop, metabolize, reproduce, and pass those programs to their offspring.” Relativity? "The laws that govern time and motion vary as a function of velocity; time, mass, and dimensions behave differently as we approach the velocity of light." And so on.

    In these examples, the trick lies in finding the key points and eliminating anything that isn’t required to convey them. It also helps if you accept the principle that you can’t say everything or provide full details, and shouldn’t try. The goal of concision is to convey the essence. Completely explaining any interesting concept takes space, and the more complex the concept, the more space it takes. Consider, for example, that economics, genetics, and physics each require a 500-page textbook just to cover the basics of the discipline. Many of those basics spawn their own textbooks, and so on down through the subsets of each basic point.

    Since this essay scrolls relentlessly past the bottom of your screen, you’ve undoubtedly noticed a certain irony: it isn’t particularly short. In my defence, I spend my whole week practicing concision; my weekend essays are the textual equivalent of putting on stained sweat pants and a torn t-shirt, swilling beer in a lawn chair, and chatting with a friend. If you’re from the TL;DR (“too long, didn’t read”) generation, and have miraculously read this far, here’s the short version: “Concision’s easy: eliminate the unimportant stuff.” If you’ll allow me a few more words: “You can do it too, with practice.” If you’re willing to read on, here’s the full (flabby, verbose) version. If you want to write concisely:

  • Start by identifying the key points. Then identify and eliminate the “merely interesting” points. Retain only the strongest support for the key points. Use imperative statements (as I'm doing here) when you want to tell someone what to do.

  • Start with a strong outline based on the key points. Don’t waste time or space describing the unimportant stuff.

  • Eliminate repetition. (Deredundantize!) The "rule" that you should “tell them what you will say, say it, then remind them of what you said” works better in oral presentations than manuscripts.

  • Establish the context once, then restate it only when a detour or digression changes it and you need to re-establish the original context.

  • Ruthlessly eliminate adjectives and adverbs.

  • Replace compound verbs and verb phrases with precise, strong verbs: write in a way that confuses the real point = obfuscate. (Most style guides have long lists of verbose phrases and their shorter equivalents. Study them.)

  • Speaking of obfuscation, don’t circumlocute: get to the point.

  • Replace compound words or phrases with precise single words: blog post = essay, pale red = pink, evil man = villain.

  • Watch for implicit redundancies, particularly in clichés and stock phrases: temporary reprieve = reprieve, unfilled vacancy = vacancy, unexpected surprise = surprise.

  • Use metaphors or key words, such as Homo economicus in my definition of economics, that speak volumes to those who understand the lingo.

  • Use possessives, even for inanimate things: the point of this essay = this essay's point = my point.

  • Use pronouns or acronyms judiciously: once you’ve established that the National Aeronautics and Space Administration is NASA, use NASA thereafter. Multi-word phrases such as “our committee” can be replaced with shorter pronouns such as “we” or “us” when the context is clear.

  • Eliminate (1) numbers and (b) letters used to enumerate short phrases; they’re rarely helpful. Turn longer phrases into a bulleted list, particularly if the sentence that introduces the list lets you eliminate one or more recurring words: “Our goals are to: [list]” rather than “Our goals are to..., to..., and to ...”. If you feel the need to use words such as first, second, and third, use a numbered list and eliminate those words.

  • Limit yourself to one strong example; provide two or three only for complex topics with qualitatively different cases or sub-cases.

  • Cite or link to resources external to the text to provide details.

  • Combine sentences by eliminating overlapping elements: “This essay provides many examples of concision. These illustrative examples show...” = “This essay provides many examples of concision that show...”

  • Eliminate the least important parentheticals. (These are words between parentheses, like this sentence, or between commas, like this phrase, that only embellish.)

  • Replace negatives with positives: not alive = dead, not wrong = right.

  • Let the manuscript sit for a day before you revise it. Examine every word under the editorial microscope to see whether it’s crucial or merely “useful” and whether its role might already be served by another word.

  • Get a Twitter account and learn to use it. A 140-character limit focuses the concentration most wondrously. (Try not to cheat by breaking longer messages into two or more parts.)


  • Of course, you can be too concise, particularly when you’re writing fiction and the goal is to wallow in the sheer joy of words. Leo Rosten’s famous joke about “fresh fish sold here daily” illustrates the problem with excess concision: Obviously, sold is redundant; the fish aren’t an art display. Similarly, here: where else would they be sold? Lose daily; if they’re not sold daily, they wouldn’t be fresh. Lose fresh; nobody would buy stinky old fish, and you're not dumb enough to try selling them. The remainder, fish, is also useless; these aren’t dogs or computers. Just display the fish in your window, and everyone will figure out why they’re there without all that redundant verbiage that makes English such a powerful tool and so much fun to play with.
    blatherskite: (Default)
    I’ve been blessed (if you’re me) or cursed (if you’ve been forced to listen to me) with insatiable curiosity and a profound sense of wonder at the universe. Pretty much anywhere I look, I can find something in the natural world to fascinate me. And sometimes my brain flits around from notion to notion like a butterfly with ADHD. Over the years, I’ve accumulated an enormously wide, though often shallow, appreciation for a great many things.

    As the years go by, I’ve tended to oscillate between my scientific training (wanting to name and pigeonhole everything) and simply appreciating things for their own sake, without having to apply a label that fixes them in intellectual formaldehyde. Labels are tremendously useful things; they help us define how things fit together, and knowing how the many parts of the world fit together provides a much more profound understanding of its wonders. But labels also strongly predetermine how we think of things, which can prevent us from seeing beyond the narrow walls of the mental pigeonholes we’ve built to contain them. More importantly, sometimes it’s nice to just enjoy something without having to think of its larger implications.

    I also love reading, witness the overflowing bookshelves in our house.* I’m of the opinion that pretty much anything I read will teach me something new or inspire thoughts completely unrelated to what I’m reading (see above re. ADHD) but that are nonetheless interesting. For example, today, for no reason I can discern, I found myself wondering why police require guns and clubs to subdue potentially violent but not necessarily dangerous citizens. The Romans had a simple and elegant solution: use nets, like those used by the type of gladiator known as a retiarius. For the most part, a skillfully used net should be completely harmless to the citizen, and would be inexpensive enough that every patrol car could have one in the trunk. Heck, officers on foot could probably carry a couple on their belt. Hmmm...

    * Were Shoshanna not equally voracious in her reading habits, this would be a serious problem. Fortunately, we’re highly compatible in this way. Even so, we both made the supreme sacrifice a few years back of weeding out some of our duplicate books and donating them to the staff of a convention.

    During the Iceland trip that I described in the past couple of weeks of blog entries, I was talking with our guide, retired geologist Richard Little of Earth View tours, about geology and legends. (Richard is an excellent tour guide and organizer, by the way. If you love geology and visiting exotic places, he’s a great choice.) Our discussion prompted a memory of a book I’d read more than 30 years ago, Ragnarok: the Age of Fire and Gravel, by the 19th-century U.S. congressman, early litcrit guy, and amateur scientist–author (many now say pseudoscientist) Ignatius Donnelly. (Such dabbling across multiple disciplines was a common Victorian thing, and it produced both interesting insights and arrant nonsense. So does modern intellectual endeavor, though perhaps less often.)

    Donnelly wrote the book in an effort to explain, scientifically, certain geological evidence that seemed to him to point strongly to a large cometary impact that strewed similar types of rock and gravel around the globe. Unfortunately, Donnelly was writing well before glaciation was fully understood and before plate tectonics was being seriously considered by geologists (i.e., before Wegener began musing about continental drift in the early 20th century). Glaciation, supplemented by our modern understanding of plate tectonics, does a far better job of explaining that evidence. But what Donnelly got right (as subsequently confirmed by substantial geological evidence) is that large rocks of all kinds, and possibly even comets, periodically strike the Earth, and that people who were alive at the time would have seen the larger impacts and tried to incorporate them into their body of myth.

    It's been 30+ years since I read Donnelly’s book, but my memory is that it's a fascinating example of 19th-century amateur scientific sleuthing that did a plausible job of explaining the geological data available at the time. I remember the writing as charmingly antique (that Victorian style again), and I remember devouring the book in only a few days. Donnelly turned out to be wrong because, of course, his knowledge was incomplete and, like many amateur scientists, he was perhaps unaware of how much data professional scientists amass in their efforts to understand. But apart from the lesson in the history of science, what fascinated me about Donnelly’s book was that he took the second part of his idea and ran with it: the notion that scientific phenomena can be incorporated into a culture’s myths. Ragnarok thus represents one of the early efforts to test a myth or legend against the evidence to see whether it has a plausible scientific explanation. Here, Donnelly was specifically investigating the Icelandic/Norse Ragnarök myth: a large comet striking Earth would almost certainly carve a fiery trail through the atmosphere (the fire part of Ragnarök), leave a trail of debris and signs of an immense impact (the geological evidence Donnelly mustered), and create a mini ice age if it threw up enough dust (the ice part of Ragnarök). Donnelly provides examples from several other cultures to support his hypothesis.

    This notion blew my young mind. The Victorians took it as a given that disciplines as different as science and history could be combined in highly productive ways that play to the strengths of each way of thinking, but this kind of interdisciplinary cross-pollination has subsequently fallen out of favor. As a result, and a sad one at that, it isn’t done nearly as often as it could be: professionals in various disciplines tend to work in their own isolated silos rather than working together to share their expertise. Whatever else one might say about Donnelly, he provides an example of several things: that amateurs can enrich our way of seeing the world, even when they’re wrong; that none of us can master all subjects, and that the amateur’s desire to understand multiple disciplines is best achieved through collaborations between professionals in those disciplines; and that (for me) understanding why an author’s thesis was right, wrong, or somewhere in between is itself a source of inspiration.

    Another example of this hybrid scientific/historical approach to exploring deep history is the notion that the great flood of the Judeo-Christian Bible represents an oral history of the prehistoric flooding of the Mediterranean basin, which occurred when the land barrier at what is now the Strait of Gibraltar eroded away, allowing the Atlantic to pour into the basin. Unfortunately, the timing doesn't seem to support this possibility; that flooding is estimated to have occurred more than 5 million years ago, well before modern humans evolved (ca. 300 ka BP for the Neandertals). The flooding of the Black and Caspian seas, between roughly 16 and 7 ka BP, is a more likely candidate for the source of this myth. Of course, the fossil evidence is incomplete and fragmentary, so it's possible that the Neandertals originated much earlier than 300 ka BP and that even older branches of the human lineage were much smarter than we currently believe, and could have been around and verbal by the time of the flood. I’m not convinced, but it’s fun to play with such notions. Julian May’s Pliocene Exile series has a ton of implausible fun with them. So even if the science and history are suspect, the speculation can still lead to some fun ideas.

    The point I’m trying to make in this essay relates to the excitement provided by new sights, new ideas, and new connections among previously unconnected facts. In a sense, whether the idea is correct matters less than whether it’s exciting. There’s always time to explore the idea using whatever tools you prefer (science, psychology, culture, whatever) and find out whether it’s plausible; it’s the exploration that’s important. The world’s a fascinating place, and sometimes idle speculation leads to even more fascinating insights, as when Wegener followed the chain of inspiration provided by suspiciously similar continental boundaries and inspired a whole new field of geology (plate tectonics). Sometimes the exploration seems futile, as in the case of the Mediterranean floods, but it can still result in good stuff (fiction, in this case). The journey is as important as the destination, as is true in so many areas of life.

    When I come across something that strikes me as cool, I want to share it with everyone I can trap into listening so they can share some of my sense of wonder and excitement. That’s a major reason why I write so much nonfiction, particularly related to writing and editing. I want other editors and writers to benefit from what I've learned. It's also why I blog about my vacations and take hundreds of photos: when I get home, I share the collection with anyone who expresses sincere interest so they can share some of my sense of wonder. (Shoshanna usually boils them down into a much shorter collection so as not to bore those who express only polite interest.) It’s my way of making the world a more wonderful place, one thought at a time.
    blatherskite: (Default)
    June 29th in Iceland
    June 30th in Iceland

    (Next few updates will be delayed... long travel/hiking days.)
