Before diving into the meat of this essay, let me define a few terms related to how software is developed and tested:
Audience analysis is how you begin the development process: you spend some time thinking about how people are likely to use your software, confirm these suspicions with real users (if possible), and you plan accordingly; that is, you design the software to support its users during the common tasks they will be performing with the software. Ideally, you use some form of Pareto optimization, in which you prioritize the subset of the product's features that are used most often or that provide the maximum benefits to the maximum number of users, then add more features as time and resources permit. I've written extensively on this subject.
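For readers who like a concrete picture, the Pareto-style prioritization described above can be sketched roughly as follows. This is only an illustration; the feature names and usage estimates are invented, not drawn from any real product plan:

```python
# Hypothetical sketch of Pareto-style feature prioritization: rank candidate
# features by estimated usage, then plan the subset that covers most of the
# audience's needs first. All names and numbers below are made up.
features = {
    "track changes": 95,  # estimated relative usage score
    "spell check": 90,
    "comments": 80,
    "mail merge": 15,
    "macros": 10,
}

# Rank features from most- to least-used.
ranked = sorted(features, key=features.get, reverse=True)

# Schedule features for the first release until they cover roughly 80%
# of the total estimated usage; the rest wait for later releases.
total = sum(features.values())
covered = 0
first_release = []
for name in ranked:
    if covered >= 0.8 * total:
        break
    first_release.append(name)
    covered += features[name]

print(first_release)  # the high-payoff subset to build first
```

In this toy example, the first three features account for the bulk of the estimated usage, so they get built and polished first, while the long tail waits for later releases.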
Alpha testing is what the programmers do, often with help from a company's "quality assurance" or testing staff, before they consider that a product is ready to be unleashed on its eventual users. In an ideal process, you ensure that all the software features work as required (based on the audience analysis), and fix all the bugs you discover while you're validating the code that you've produced; you might do this yourself, or possibly with a colleague's help, depending on the software development culture at your employer. In practice, programmers and quality assurance staff rarely have time to do either task to their own satisfaction, because software release schedules are driven by marketing, not by the programmer's desire to release a product they can be proud of.
Beta testing is what happens after alpha testing. In this step, you release your best shot at producing a stable, usable product to a subset of its users, most of whom are not employed by your company. This group then tests the software to destruction to make sure it works. The principle is that several hundred people (sometimes thousands) banging away at the program in ways you hadn't anticipated will reveal subtle flaws you didn't notice during your alpha testing, thereby giving you time to fix them before you release the "final" version of the product. (I put "final" in quotations because as any computer user knows, software is never final. There are always subtle bugs that go undetected or that are sufficiently rare a company figures it can wait to fix them. And each new round of patches, fixes, and updates tends to introduce new errors. This isn't professional incompetence; it's inherent to the nature of any complex system, and software is pretty damned complex.) Beta testers receive the software for free during the testing period, and often receive a free license for the final shipping version of the software to compensate them for their efforts.
If all three steps go well, the product that is finally released for sale is stable and usable. How well does Microsoft meet these criteria? In terms of its Windows operating system, amazingly well, particularly given the (literally) more than a billion people who are using one of the various versions of Windows. In terms of Microsoft Office? For the Windows version (WinWord), quite well. I've been profoundly impressed by how well Word has worked with each new version that I've installed: it's become more stable, less buggy, and (often but not always) easier to use. But on the Macintosh side (MacWord)? Not so much. MacWord has been bad enough that when I train editors to use Word for editing, I always start with the recommendation that they use the Windows version, even if they use a Macintosh as their primary computer. WinWord has simply been a better product for the nearly 20 years I've been using Microsoft Word. MacWord 2016 promised to bring the Mac version up to parity with WinWord. Did they meet that promise?
Caveat: I haven't used MacWord 2016 myself, since I'm waiting for the first major round of bug fixes before I install it on my work computer. I'm basing this review on the demos I've seen, a few online reviews, and my wife's experience with the first shipping release of the software. The TLDR (too long, didn't read) version: My wife, who's been working with word processors for something like 30 years, got so frustrated with Word 2016 that after a couple of days of cussing it out, she abandoned it and returned to Word 2008 -- which is also severely flawed, but is at least stable.
In terms of audience analysis, MacWord 2016 is a nice move in the right direction. One of the horrible problems with MacWord has been how radically it differed from WinWord in its interface. Microsoft has always justified this based on the notion that users of the two operating systems have different expectations, an argument that once had some merit but is now outdated. This argument ignores three key points about real-world use of Word: First, most computer users are now proficient at switching interfaces when (for example) they move from Internet Explorer to Firefox or Chrome. The difference between Windows and the Mac is now largely irrelevant, and is often less disruptive than switching between programs under the same operating system. Second, and much more important, releasing versions of Word with sometimes radically different behavior means that even after users figure out the interface differences, they can't rely on the software to behave identically. Third, and most important in the working world, these two factors create an impossible situation for trainers, who must master at least two different versions of the software (Mac and Windows) and find ways to teach the two groups of users about the differences. At a crude guesstimate, my book on onscreen editing is about 25% longer than it would need to be if the interface and behavior were consistent between versions.
With Word 2016, Microsoft has finally taken some significant steps to make the user interface look more similar between Mac and Windows, but the software still doesn't behave the same on both platforms. A large part of the problem results from a failure in the alpha testing process, which let profound bugs slip through that should never have been revealed to the eventual audience, and by insufficient time allocated to beta testing, which would have given the programmers time to solve those problems.
In terms of alpha testing, Microsoft seems to have fallen on its face -- again -- with MacWord. Many features that broke in the first release of Word 2011, and that took months or years to fix, are broken again in the first release of Word 2016 (e.g., failing to display the correct tab of the Ribbon for the current context, and a "garbage collection" error that generates the alarming message that there's not enough disk space to save the open file, even when a ton of space is free). This wouldn't be such a problem if the beta testing process had been given enough time to detect such problems and let the programmers fix them. But the current version of Word 2016 is a shipping product: you have to pay money for it. That's entirely inappropriate for a product this buggy.
Bottom line: Microsoft should be ashamed of this performance, and should not expect customers to pay for such a shoddy, unfinished product. My recommendation that professional editors should stick with WinWord still stands. In a month or two, after Microsoft releases the first major service release for MacWord to fix these egregious problems, I'll be willing to risk installing it, and will then report back on whether they've come close to parity with WinWord -- or even produced a usable product.