2. Innovation Happens Elsewhere

At the beginning of the twenty-first century, the biggest challenge high-tech companies face is how to innovate effectively--that is, how to find the right innovations to invest in and turn them into a good profit. To many, open source is about Linux and a few other software systems and how to use them to run an inexpensive back office. But open source is also a practice and a tool for encouraging and harvesting innovations. Open source is a way for companies to engage in highly effective marketing--both in the sense of marketing a company, its software, and its brand, and in the sense of finding out what products and product features to build.

Like any engine, an open-source innovation engine requires design, engineering, tuning, and maintenance. But doing those things requires understanding the science behind the machine. That understanding is what we call "innovation happens elsewhere."



Open Source Is a Commons

Open source is fundamentally about people volunteering to work on projects in what could be called the commons; that is, it is about working on things for the public good. The sort of commons we are talking about is composed of things whose most basic value is not diminished by making a copy--in fact, in many cases a copy increases that value. For example, if a piece of software is not for sale, then its value is not decreased if someone makes and uses a copy of it. If the software implements a collaborative tool, the additional copy makes the software more valuable by increasing the number of people collaborating. Written works and knowledge are also part of this commons. Taking advantage of such a commons can be as simple as just taking what's there--this is permitted--but learning and applying the rules of engagement of the communities surrounding the commons can enhance the exchange, pave the way for putting works important to you into the commons, and elicit various forms of volunteer effort.

Open source is not a small or niche movement. There are tens of thousands of open-source projects--SourceForge,[^955729] the largest but not the only openly available open-source hosting site, hosted over 80,000 projects and had over 810,000 registered members in mid-2004. Corporate interest in using the open-source code residing in the commons is strong enough that some companies are starting to adapt their internal product life cycles and development methodologies to accommodate the nature of open source. And during this period of downturn for the technology portions of the economy, open-source projects are a means for software developers and designers not only to hone their skills on real-world projects but also to expand their expertise in useful directions. Working on open-source projects, learning the tools and processes, and gaining expertise in various open-source applications and systems are likely to prove valuable, if for no other reason than that seeing other people's code and style expands your own horizons and makes you a better developer and designer.

[^955729]: http://sourceforge.net

Can the Commons Make a Difference?

Although the commons we're talking about includes other things, it is largely a commons of source code licensed in such a way that many people can work on it at the same time. The best-known software in this commons is Linux, a Unix-like operating system. Linux can be viewed as a narrow technological phenomenon--of importance to some companies in the technology sector, but not affecting regular people in their homes and lives. To understand how the commons can benefit ordinary people, it's easier to look at other things in the commons, such as the World Wide Web. The Web is interesting because it is not only within the commons--built by a group effort--but also forms part of the infrastructure of the commons. It's worth looking at the growth of the Web a bit to begin to understand the power of open source for business.

For many people, the Web is the first authority--not just ordinary folks but professionals turn to the Web for information. Nearly all the scientific and technical journals the authors read are available online. Many people and most companies and other organizations put up websites and keep them up to date. Some are as ordinary and prosaic as a man talking about his hobbies and cats, and others are elaborate collections of essays and artwork or opinions and software. Today there are approximately 3 million distinct public websites,[^955733] and on average each site has about 450 pages, meaning there are nearly 1.4 billion web pages. And this doesn't count web dark matter: information stored in databases and served up using dynamic web page technology.

The Web was built as a volunteer effort--there was no central planning, only a set of simple protocols and tools, and the idea took off on its own. The core of the Web is a protocol called http (HyperText Transfer Protocol), which enables one computer to request a particular resource--usually a web page--from another over the Internet. This protocol (http) works on top of the basic Internet protocols, and it can be considered a peer of other layered protocols such as email (smtp, Simple Mail Transfer Protocol) and file transfer (ftp, File Transfer Protocol). Three more things were needed from the infrastructure. The first was a way to describe web pages that was not based on a proprietary format. The choice here was the HyperText Markup Language (html). This was originally an extremely simple language--it could not display precisely typeset material natively, but it was easy to learn. The second was a way to locate pages to display that didn't depend on what computer or operating system was being talked to--the Uniform Resource Locator (URL). The third was the web browser.
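
To make the exchange concrete, here is a minimal sketch, in Python, of the kind of request/response cycle that http defines. This is our illustration rather than anything from the original text: the host name and path are placeholders, and real browsers send many more headers.

```python
# A minimal sketch of one http request/response exchange, using only
# Python's standard library. The host and path are illustrative
# placeholders, not examples from the text.
import socket

HOST = "example.com"     # a hypothetical web server
PATH = "/index.html"     # the resource part of the URL http://example.com/index.html

# An http request is plain text: a request line, headers, and a blank line.
request = f"GET {PATH} HTTP/1.0\r\nHost: {HOST}\r\n\r\n"

with socket.create_connection((HOST, 80)) as conn:
    conn.sendall(request.encode("ascii"))
    response = b""
    while chunk := conn.recv(4096):  # read until the server closes the connection
        response += chunk

# The response is also plain text: a status line such as "HTTP/1.0 200 OK",
# headers, a blank line, and then the html describing the requested page.
print(response.decode("latin-1", errors="replace")[:400])
```

In effect, this is all a browser fundamentally does: use a URL to decide which computer to contact and which resource to ask for, send a plain-text request, and interpret the html that comes back.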

The keys to the remarkable success of this approach lay in the web browser more than in the other components. The first key was that the early web browsers tried their best to display something regardless of how imperfect the html describing the requested page was. The second was that web browsers could display the source html for the page being displayed. Using this feature, amateur web designers could find a page whose layout they liked and then mimic it for a page of their own. Even if they were not too careful in their mimicry, the tolerant web browsers would probably display enough of what they intended to get by.

With this tolerance for imperfectly described pages, and with such gifts as the early websites their creators provided, the Web took off in ways that couldn't have been foreseen. People used existing websites and their openly viewable source code to clone (with variations) web page designs and put out their own content. People shared their hobbies and expertise. Organizations with special interests educated the public, and clubs spread interest in their activities. Scientists and engineers started using the Web to disseminate their research results. Term papers, dissertations, and manias started appearing on the Web, so that by 2003 there was almost literally no corner of human knowledge that was not at least mentioned there. With the advent of search engines, it's now possible to find this information easily. It used to take a prodigious and obsessive person to be able to tell you the names of the poem and the poet given only a flawed memory of a line, but now anyone with a browser and an Internet connection can answer such questions within five minutes.

This growth came predominantly from individual volunteer effort. In many cases today, individuals pay to put up their websites--in other words, they pay to give something away. The Web has been used for independent news sources, and there are numerous examples in which web-based action has changed the course of history. There is little doubt that the Web has transformed how people work and interact. Weblogs (simple journal mechanisms) are flourishing and gathering enough power to affect politics, and wikis (reader-extendable websites) are being used to construct grassroots dictionaries and encyclopedias.

[^955733]: These data are from the OCLC Web Characterization Project (http://wcp.oclc.org). A public site is one that is openly available for anyone. There are in addition 2.5 million private sites that are password or payment protected, including about 100,000 adult sites, and another 3 million provisional sites that are under construction.

The Commons and Software

Open-source software--and the closely related variant, free software[^955766]--is increasingly being used in mainstream computing, meaning in corporate information technology (IT) departments and for consumer websites. Open source is becoming common through the Linux operating system and the Apache web server. In addition, there are open-source databases, compilers, window systems, programming languages, text editors, software development environments, productivity suites, and just about anything else you can think of. The Apache web server is currently the one most commonly used for public websites. The use of the Linux operating system in commercial settings is spreading rapidly, poised to one day become predominant.

Many companies beyond the early pioneers are embarking on more and more open-source or open-source-like projects, particularly companies that are building platforms and require ubiquity. Moreover, many companies are adopting open source for some of their internal projects, especially for internal tools and utilities for which sharing across the company makes sense.

[^955766]: Free software is software whose source is licensed in such a way as to guarantee that the source remains open and to encourage making related software free. The concept of free software comes from Richard Stallman and the Free Software Foundation, and free software licenses include the GPL and LGPL. In this book, the term open source is taken to include free software.

Open versus Closed

According to Henry Chesbrough (Open Innovation: The New Imperative for Creating and Profiting from Technology), the traditional strategy for creating new technology and profiting from it is based on the old AT&T/Bell Labs model:

The model is a closed engine in which all innovation is produced by the company within its own walls. In the past, it was not only possible but common for large companies to corner the market on the smartest people in a technology field. But during the 1990s, this changed. As new companies started up, with the possibility of founders and early employees becoming instantly rich, researchers and advanced developers started moving around instead of staying with the same company for their entire careers. Knowledge spread, and an employee's loyalty to a particular company (and the company's loyalty to its employees) became much rarer.

The old model has broken down. As Chesbrough points out, although as recently as 1991 70.7% of industrial research and development (R&D) spending was done by companies with 25,000 employees or more, by 1999 this figure had dropped to 41.3%. Moreover, as we see later, the first-mover advantage, wherein the company with the first product in a new space gains and retains a market-share advantage, is a myth. The newer model is to look for innovations wherever they may pop up and use them carefully by layering on unique value. Proprietary research to innovate still makes sense, but not to the exclusion of looking outside a company's own firewalls.

Use of the Commons: Creativity and Conversations

At the heart of a company's reasons for using open source is the nature of creativity. Creativity arises from diversity of ideas and viewpoints. Such ideas and viewpoints are spawned by conversations, either through direct dialog with people or indirectly through artifacts such as art, literature, song, and technology.

Creativity

Creativity is regarded by many as a talent that some people are blessed with and others not. Talent, however, is simply what comes easily. For artists, scientists, engineers, and writers, creativity comes from the ability to put together various triggers that they happen on; a trigger is anything that causes a thought to come to mind. Imagine a group of people in the distant past trying to figure out how to achieve better workplace collaboration. When the members of this group observed sports teams operating smoothly, they found many triggers there: specifically defined roles, coaching, and game plans. In fact, sports acted as a set of triggers for understanding and operating on workplace collaboration. So powerful were these triggers that we've adopted some of their names into our vocabulary of business teamwork.

The term trigger is borrowed from The Triggering Town by the twentieth-century poet and writing teacher Richard Hugo, who used it in teaching the writing of poetry:

A poem can be said to have two subjects, the initiating or triggering subject, which starts the poem or "causes" the poem to be written, and the real or generated subject, which the poem comes to say or mean, and which is generated or discovered in the poem during the writing. That's not quite right because it suggests that the poet recognizes the real subject. The poet may not be aware of what the real subject is but only [has] some instinctive feeling that the poem is done. (p. 4)

For Hugo, a trigger is what starts a poet off in a particular direction. That direction can change or the ultimate thrust of the poem can be something else or simply perceived as something else, but the trigger is the creative spark that gets things going. Triggers are in fact ubiquitous in human life. We find them operating in art, engineering, and the sciences.

Looking at literature more broadly, we see connections between works--much of our literature is a response to other writings and writers. In art we notice that when one painter started the style later called Impressionism, many others followed the trigger. Where did Impressionism come from? In his essay on the origins of Impressionism, Louis Emile Edmond Duranty quotes a letter from the artist Eugène Fromentin:

A sculptor or a painter has a wife or a mistress who is slim, light, and lively, with a turned-up nose and small eyes. He loves these things in her, even with their faults. Perhaps he even went through a passionate affair to win her. Now, this woman--who is the ideal of this artist's heart and mind, who has aroused and revealed his true taste, sensitivity, and imagination because he has discovered and chosen her--is the absolute opposite of the feminine ideal that he insists on putting into his paintings and statues. Instead he keeps returning to Greece, to women who are somber, severe, and strong as horses. In the morning he betrays the turned-up nose that delights him at night, and straightens it. Consequently he either dies of boredom or brings to his work all the gaiety and effort of thought of a box-maker who is skilled at gluing and who wonders where he will go to have some fun after he has finished work.[^955769]

Or, Duranty goes on to say, the living woman will displace the Greek Venus and we will see instead the slim, light, and lively woman with the turned-up nose. She has become a trigger. And such triggers coming from life--the way light works to form color, the movement and natural poses of ordinary people, and the idea that an actual viewer cannot see everything--gave rise to Impressionism, whose practitioners and works became triggers to other painters and which combined with other triggers to instigate the styles and movements in art we see today.

Brainstorming groups work precisely because the number of diverse triggers available is large and the group is encouraged to make associations that normally would be edited out quickly. For such groups to work well, there have to be three main ingredients: a willingness to contribute ideas with relative abandon, a willingness to surrender ownership and protection of your ideas to the group, and a willingness to accept the responsibility of nurturing and continuously improving any ideas generated by the group as if they were your own. These principles guide not only successful brainstorming groups, but also such diverse activities as writers' workshops, appreciative inquiry, and design charrettes.

Some triggers are legendary. Isaac Newton thought of the law of gravity after being struck by the idea that an apple falls from a tree no matter how high the branch, while the moon, surely subject to the same force as the apple, remains at a distance. The apple was a trigger for a series of thoughts that led to good science.

In problem solving, triggers include characterizations or descriptions of existing knowledge and techniques that cause the problem solver to home in on an approach to try. These descriptions are usually general or abstract, so seeing the connection requires some skill. But additional triggers can appear from anywhere and can have remarkable effects. Many scientists and mathematicians report that solutions to problems appeared to them while they were engaged in some other activity. Things happening in their lives acted as triggers.

Companies in a particular market use one another as triggers, looking at how the others approach the market and making both small and large improvements, making products where the market has left holes in the product space, and combining product ideas from other companies. For example, it is well known that Microsoft used the Apple Macintosh as a trigger for its Windows products. Apple used the Xerox PARC Star as a trigger for the Macintosh window system. And Xerox PARC used earlier window-like systems as its triggers: the Stanford AI Lab's DataDisk system, Doug Engelbart's On-Line System (NLS) from his Augmentation of Human Intellect project at Stanford Research Institute (now SRI International) in the 1960s, the later windowing work at MIT on Lisp machines, and Ivan Sutherland's Sketchpad system from 1962. And, of course, all such systems were triggered by paper, artists' canvases, and blackboards.

It's easy to think that triggers can't be of much use in the real business world because there seems to be so much evidence that the company that comes up with an idea first is the one that benefits the most--and, coupled with strong intellectual property laws, that advantage would seem to be growing.

Evidence for the power of triggers--and for the relative immunity of a company's success to not being first--can be found in the work of Gerard J. Tellis and Peter N. Golder. In their book Will & Vision: How Latecomers Grow to Dominate Markets, Tellis and Golder carefully define pioneers and markets and come to what some business theorists consider counterintuitive findings about the first-mover advantage. Traditional wisdom holds that first movers dominate; one market study showed that in the steady state, market pioneers hold a 30% share of their markets, whereas latecomers hold 13%. Another study found that about 70% of current market leaders pioneered the markets they dominate. Tellis and Golder note that these studies actually measure the effect of early leaders, not true pioneers, with part of the blame for the flawed studies lying at the feet of the current market leaders, who, when interviewed, tend not to know the technological history of their markets well.

For example, Procter & Gamble claims it created the disposable-diaper market in 1961, the implication being that they invented the disposable diaper and thereby created the market. In fact, they were simply an early leader; Johnson & Johnson developed disposable diapers (called Chux) in the 1930s for hospital use and manufactured them for other uses by 1936 or 1937. Two other quick examples: Gillette entered the safety razor market in 1903, but a company called Star had introduced a safety razor in 1876. HP entered the laser printer market in 1984, but IBM had one on the market in 1975 (the 3800).

Tellis and Golder define a pioneer as "the first firm to commercialize [a] product" (p. 32) and a market as "a competitive environment in which firms attempt to satisfy some distinct but enduring consumer need" (p. 33). Using these definitions, they found in examining 66 markets that the failure rate for pioneers was almost 65%. In only 9% of the markets were the current market leaders true pioneers--and the average market share of the surviving pioneers is now a mere 6%. The real success goes to early leaders--firms that entered the market an average of 19 years after the pioneers and currently have a market share three times that of the pioneers.

This means that being first to market is not necessary for success. In the cases Tellis and Golder discuss, a pioneer's inventive efforts form a trigger that the eventual leader picks up on. Improvements are made, often continually as the product matures. And the time span between the introduction of the pioneer's product and the eventual leader's contains the customers' (or users') response to the idea and its technology; that response forms a small commons of its own, which the eventual leader uses to gather the information and innovation needed to make its product a success.

Tellis and Golder's point is that a visionary company with enough persistence can come to dominate a market. The vision is usually of the widespread use of a technology product--which implies an acceptably low price--and the persistence is to stick with product development long enough to achieve an acceptable price for an acceptable product.

This brings up another theme important to open source that we explore later--continuous (re)design. Continuous (re)design is the way that artifacts in the commons are improved--not only their implementations but their designs--by the acts of people improving small pieces, adapting the technology to other applications, and finding new ideas that might apply.

A nearly perfect example is Sony and the videorecorder. Ampex produced the first videorecorder in 1956; it initially sold for $75,000 and recorded only in black and white on 2-inch tape. It was such a hit that by 1960 a mass-produced version was selling for $50,000 to about 100 customers a year. Also in the 1950s, Sony's cofounder, Masaru Ibuka, had a similar vision, but with an important difference: Ibuka's vision was for a low-cost home version of the videorecorder. In pursuit of this vision, he asked his engineers to come up with a videorecorder, which they did, for a selling price of $55,000. Ibuka, believing in a home rather than a professional market for videorecorders, asked them to redesign it to cut the price by 90%, which they did. But then he asked them to redesign it to cut the price by another 90%. The whole engineering process took 20 years, but eventually (in 1975) Sony released the Betamax, which sold for under $1000. Over the next 15 years Sony's annual sales grew from $17 million to $2 billion, while Ampex's never even reached $500 million. And while this was happening, JVC and Matsushita grew to $5 billion between them. The latecomers were selling $7 billion in recorders, but the pioneer, Ampex, was stuck at 6% of the market.
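
As a quick check on the arithmetic of Ibuka's two redesigns (our gloss, not a calculation from Tellis and Golder): two successive 90% cuts reduce a price by a factor of one hundred, which is what takes a $55,000 machine into the Betamax's sub-$1000 range.

```python
# Back-of-the-envelope check of the two 90% price cuts described above.
price = 55_000
for redesign in (1, 2):
    price *= 0.10                 # each redesign cut the selling price by 90%
    print(f"after redesign {redesign}: ${price:,.0f}")
# Output:
# after redesign 1: $5,500
# after redesign 2: $550
```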

What this means for open source is that doing secret development in hopes of springing a new product on the marketplace to gain the much-coveted first-mover advantage rarely works. In almost every market, the technology is well known and many pioneers have failed or are failing. It is a race to see which firm can come up with the enduring product for a large market, and an enduring market comes from matching users' needs well with an affordably priced product. Matching users' needs requires interacting with them; affordable pricing requires, for software, primarily low maintenance costs.

If there is no first-mover advantage, then it doesn't matter too much whether the early work is done inside a company's firewalls or in the commons. And it doesn't matter that the early work can serve as a trigger to many companies because that happens already. But we return to this theme shortly.

Conversations

The second half of how the commons works for a company is the conversations. These take many forms. The best way to think about this is by analogy to a great city, by which we mean a city where people gather to exchange goods, culture, food, and ideas. Although all cities do these things to a certain degree, the great cities stand out.

Every place renowned for creativity is a confluence of smaller communities of interest and practice whose proximity serves as a trigger for further creation. Renaissance Florence was great not because there were only painters there, but because there were also sculptors, goldsmiths, poets, writers, clerics, architects, builders, and an expectation of great creation. Today in places such as San Francisco, London, Boston, Austin, San Diego, and Prague there is a tremendous diversity of thinkers, artists, technologists, chefs, theaters, and workshops. That is why companies and individuals flock to those places, and why it's invigorating to be in them.

In The Rise of the Creative Class, a recent study of the transformation of work, Richard Florida writes this about creativity and place:

Creative people... cluster in places that are centers of creativity and also where they like to live. From classical Athens and Rome, to the Florence of the Medici and Elizabethan London, to Greenwich Village and the San Francisco Bay Area, creativity has always gravitated to specific locations. As the great urbanist Jane Jacobs pointed out a long time ago, successful places are multidimensional and diverse--they don't just cater to a single industry or a single demographic group; they are full of stimulation and creative interplay. (p. 7)

In a context like this, communities do what they do best--create a wide-ranging portfolio of resources. In doing so, they mimic the great cities in history, which were not simply confluences of diverse and creative people but also sources of rich culture, galleries, books, workshops, teachers, students, cafés, cuisines, architectural styles, and building methods. A great city is creative only if things are created there.

A great city concentrates triggers, creating the context for creativity, and it also brings the possibility of important conversations: between poets and sculptors, between businesspeople and painters, and between clerics and composers. If our software commons is also a community, we can expect other sorts of important conversations: between customers and companies, between designers and users, between designers in one company and designers in another, between developers and users, between marketing and customers, and between developers in one company and developers in another. Under the right conditions, these conversations can do important things for a company. First, the company can rapidly learn how to improve its products. The conversations are direct, which means that the cost of having them can be low and the translation to concrete product-based actions can be efficient. Second, when customers and others are invited into the circle of trust, the realities of doing business--such as delays caused by product cycles and rushed quality assurance--are less likely to be held against the company. The nature of the relationship is entirely different. It is more akin to the gift economy that we experience in our families and religions than to the commodity economy that dominates business. In such gift-based relationships, the goals are collaboration, joint authorship, and appreciative inquiry. We say more about this later; for now, keep in mind that the sorts of open-source projects talked about in this book combine the commodity and gift cultures.

As mentioned earlier, Tellis and Golder attribute enduring market leadership to two qualities: will and vision. They think of these largely as attributes of the leaders of the enduring market-leader companies, but perhaps they need only be traits of the companies themselves. An enduring market leader is a company that dominates and has dominated its market for many years, often dozens. For this definition to make sense, only mass markets qualify, and this is reflected in their definition of a market as a competitive arena that satisfies an enduring consumer need. Consumer, of course, is not a pretty word and is considered by many to be insulting[^955779]--a company has one major interaction with consumers: It sells to them.

Vision, to Tellis and Golder, has to do with how a person or company envisions the consumers or mass market for a product. For example, the true vision of Sony's cofounder was not the videorecorder but a videorecorder for the masses priced at $500. What he envisioned was not the making of copies of broadcast material, but millions of people sitting in front of TVs playing back material broadcast earlier. The trigger of all research aimed at implementing such a vision is not the thing itself or its technology but its price. Because achieving a vision requires an almost absurd amount of research and development, companies and their visionaries require an almost absurd amount of willpower and persistence to keep at it until the technical and manufacturing breakthroughs are made that enable the mass-market (pricing) vision to be realized.

If we were to stop here, the lesson would not be an important one; it would be merely that it pays to be lucky enough to have the resources to keep working until you get it right. Keep in mind, though, that you still need to do the refinements--and that is a lot easier if the resources come from some other source.

An important characteristic of enduring market leaders is that they usually deliver, at first, a fairly low-quality product that is just good enough for the consumer, and then engage in a lengthy conversation with consumers to determine what sorts of features and what level of quality are actually required. After this, the enduring market leader's products often eventually outstrip those of the niche high-quality-product companies. In other words, the will or persistence that Tellis and Golder speak of is not akin to stubbornness; it is a form of continuous (re)design.

The pattern is to start modestly and improve according to the dictates of the users: get something of acceptable quality into the largest marketplace you can, and then improve it--or later add a narrower line of higher-quality, more inventive products. This pattern has come to be known as "worse is better,"[^955782] and its key is the conversation that takes place between the users of a product or technology and its designers and builders. Unfortunately, the primary exemplars of this pattern are within the technology-for-technologists sector.[^955785]

That is, the users are developers, and the artifacts that fall into this category include operating systems (Unix), programming languages (C), and text editors (Emacs). The idea is that the users of a technology are provided with an early version of it, which they can extend and improve over time. The early version should be complete enough to be useful but need not be fully featured, and ideally the usage patterns should not be nailed down completely, so that a wide range of possible design and implementation features remains feasible. Then, by conversing with the users--through email, code exchanges, and design discussions--the technology can be moved forward according to the most dominant and interesting uses.

This approach is called "worse is better" for two reasons: to indicate that the initial release is not what would normally be considered product quality or complete, and to emphasize that it is acceptable to stop short of a perfect design. When you can afford the discipline of allowing the users of a technology to help design it and be part of its future, the result is usually designed as well as or better than the original designers could have managed alone, and it is almost always better suited or adapted to its users.

"Worse is better" works by taking advantage of conversations with users, but the key is to listen and design with discrimination--not every suggestion should be followed. In most cases, there will be a handful of surprising ideas that should be grabbed for follow-up. Coming up with new ideas is not as difficult as selecting which ones to implement--the more ideas on the table, the more likely a good one is there ready to be found. The choice is sometimes more reliably made when there are several minds looking at the question, and that's what "worse is better" promotes.

When we put this approach together with the observation by Tellis and Golder that being first with a previously unknown product is not very important, we can see that there is little risk in using the commons for collaborative design and implementation.

Diversity and Selection Versus Continuous (Re)design

There are actually two ways to achieve excellence in design and execution: (1) diversity followed by selection and (2) continuous (re)design. In the diversity-and-selection approach, many artifacts are produced and the best are picked. For example, when we visit the Musée d'Orsay in Paris to view the Impressionist paintings, we notice that they're all quite good. Partly this is because the painters were good and were exploring the beginnings of a new way of painting, but partly it is because we are seeing only the best Impressionist work. When we read the great poems of William Butler Yeats, we are reading only a selection of his work. When we harvest triggers, we are selecting from a diversity.

On the other hand, continuous (re)design takes a good starting place and revises and revises until that starting artifact has become as good as it can be. When a great poet revises heavily, we often see very good work--certainly it demonstrates deep knowledge and practiced craft.

In fact, the best work is always the result of a combination of the two approaches. And both approaches and their implied practices apply to open source and how it moves forward.

[^955769]: "La Nouvelle Peinture," in Charles S. Moffett, Ruth Berson, Barbara Lee Williams, and Fronia E. Wissman, The New Painting, Impressionism, 1874-1886: An Exhibition Organized by the Fine Arts Museums of San Francisco with the National Gallery of Art, Washington, D.C., p. 40.

[^955779]: http://www.investorwords.com defines consumer as "an individual who buys products or services for personal use and not for manufacture or resale." This excludes anyone who might be a collaborator or with whom you might want to have a conversation.

[^955782]: Gabriel, http://dreamsongs.com/WorseIsBetter.html

[^955785]: We would prefer them to be conversations between end-users and designers, but this loop, although it exists, is more tenuous and harder to demonstrate than it should be.

Innovation Happens Elsewhere

Innovation happens everywhere, but there is simply more elsewhere than here. Silly as it sounds, this is the brutal truth: Regardless of how smart, creative, and innovative you believe your organization is, there are more smart, creative, and innovative people outside your organization than inside. In addition, the majority of elsewhere doesn't particularly care to make products in your space. But customers already using a product for real work are in a good position to offer suggestions about the directions in which that product should evolve. Even if such users don't have concrete suggestions, the ways that they use the product can provide hints about how to improve it. Remember, innovation comes from triggers, and the trigger does not have to be aware of what it is triggering. Just as a palm tree near the beach at sunset can trigger a poet's masterpiece without any thought whatsoever, a user using a product or mentioning a circumstance of its use can trigger a major product direction in a designer prepared to receive such a trigger.

As Henry Chesbrough points out, R&D spending profiles suggest that it is less and less common for innovative people to be found at large companies with virtual monopolies in specific technology areas. Not all the smart people work for you, you cannot afford to try to create all the innovations yourself, and you cannot provide enough triggers internally to find the stunning new product idea.

More and more, the game is about being connected rather than about dominating. The worst thing you could do would be to allow "innovation happens elsewhere" to become "revenue happens elsewhere."

To succeed, companies need to find ways to use outside innovations and to become part of a distributed fabric of innovation through a combination of licensing and well-chosen gifts. Although the concept of a gift may not at first seem to fit well with free-market capitalism, it might when thought of in the context of collaborating with others to build common commodity-like infrastructure. If it makes business sense in that context, then perhaps it makes business sense in others.

This is what open source is all about: harnessing engines of innovation in software.