Last week, I was at the Car Wash Show in Las Vegas speaking about the connected consumer and what it means for the future of the car wash industry. And like so many of the industries I work with, there are massive changes coming its way. Here are some of the things I shared about how the next shift in technology is transforming this age-old business.

The car wash industry is a highly fragmented marketplace, but that is set to change as the industry undergoes the next leg of its digital transformation. That fragmentation is likely one of the reasons the industry has been slow to adopt certain technologies compared to other segments, but that is changing rapidly. Technology adoption is transforming both the front-facing consumer experience and back-of-the-house operations. Some of these changes are altering the nature of the business itself, for example by fostering unlimited wash subscription services, one of the largest areas of growth for the industry. But this is just the start. Consumer-facing technologies like license plate readers and digital kiosks will enable a new level of customization, which is especially important in an environment where labor is hard to find. Technologies like lidar, used by autonomous vehicles to create 3D maps of their physical environments, are being used to deliver more precise car washes that use less water and fewer chemicals in the process.

The next big step for the car wash industry is to connect the front of the house with the back of the house. That means connecting CRM systems with digital kiosks so businesses can deliver a personalized experience at scale. It means gleaning insights from the data exhaust of increasingly digitized and connected equipment. And it means using that data to make better-informed decisions about things like promotions and staffing.

Of course, these are just some of the changes underway and many are coming to other industries, but I’m excited to see how the car wash industry is beginning to use technology and data to transform and grow.

Given the early mission of the internet to democratize information, it’s somewhat surprising that we don’t see more nonprofits operating in the tech space. Wikipedia is the strongest example of what that model might look like. In New York, The Driver Cooperative (TDC) is trying to launch a ridesharing app that would get closer to a nonprofit model:

When it rolls out to the public early next year, TDC will become New York City’s first worker-owned ridesharing platform — owned by the drivers themselves, rather than by big investors and executives. Its founders’ brazen idea is that TDC can actually gain a competitive advantage over Uber and Lyft — saving money and funneling those savings back to drivers — by doing away with the most exploitative practices of that dominant duopoly. “The way the [Uber] model is organized is extractive. It takes out the money and doesn’t give back much. Imagine a company that doesn’t have any profits, but has created billionaires,” Lewis says. “That money comes from drivers.”

TDC hopes it can also change the cost structure, which could make it the low-cost provider:

By combining the purchasing power of all the members, they hope to lower expenses on costs like gas and insurance — expenses that Uber and Lyft drivers must handle on their own. They project that this should all add up to 8–10% higher earnings for drivers on every ride, even while being able to beat their competitors on fare prices. And if the coop has any profits left at the end of the year, they will be paid out to drivers as dividends.

It is very difficult to compete against an entrenched company in winner-take-most markets. Being able to offer a comparable product at a lower price helps, and taking on nonprofit status could help companies achieve that. So as more markets mature, we might see competitors arise more frequently as nonprofits.

Here’s the full article on TDC.

Breast cancer is the most common cancer in women worldwide. Early detection and treatment can lower mortality rates. But clinicians still fail to identify breast cancer about 20 percent of the time (false-negative results). Clinicians also sometimes identify breast cancer when none is present (false-positive results). Studies suggest 7–12 percent of women will receive a false-positive result after one mammogram, and after 10 years of annual screening, more than half of women will receive at least one false-positive recall.
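The “more than half” figure follows directly from compounding the per-screening false-positive rate. A quick back-of-the-envelope check, using the 7–12 percent range from the studies cited above and assuming (as a simplification) that each year’s screening is independent:

```python
# Probability of at least one false-positive recall across n annual
# screenings, given a per-screening false-positive rate p.
# Assumes independence between years -- a simplification for illustration.
def prob_at_least_one_false_positive(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

low = prob_at_least_one_false_positive(0.07, 10)   # ~0.52
high = prob_at_least_one_false_positive(0.12, 10)  # ~0.72
print(f"10-year false-positive risk: {low:.0%} to {high:.0%}")
```

Even at the low end of the per-mammogram range, the cumulative 10-year risk clears 50 percent, which is consistent with the “more than half” claim.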

False-negative results provide a false sense of security and could ultimately hinder treatment effectiveness. False-positive results can cause anxiety and lead to unnecessary tests and procedures. Another hurdle in identifying breast cancer is a shortage of radiologists needed to read mammograms.
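These two error types are usually summarized as sensitivity (the share of actual cancers that get flagged) and specificity (the share of healthy patients correctly cleared). A minimal sketch, with hypothetical counts chosen only to mirror the roughly 20 percent miss rate mentioned above:

```python
# Sensitivity and specificity from screening outcome counts.
# tp: cancers correctly flagged, fn: cancers missed (false negatives),
# tn: healthy patients correctly cleared, fp: healthy patients
# incorrectly flagged (false positives).
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)

# Illustrative numbers only: 80 of 100 cancers caught (~20% missed),
# 910 of 1,000 healthy patients correctly cleared.
print(f"sensitivity: {sensitivity(80, 20):.0%}")   # 80%
print(f"specificity: {specificity(910, 90):.0%}")  # 91%
```

The AI results quoted below are reported in exactly these terms: the false-negative reductions are sensitivity gains, and the false-positive reductions are specificity gains.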

Researchers have developed an AI system that surpasses human experts in breast cancer identification. Their study results were recently published in the journal Nature.

We show an absolute reduction of 5.7% and 1.2% (USA and UK) in false positives and 9.4% and 2.7% in false negatives. We provide evidence of the ability of the system to generalize from the UK to the USA. In an independent study of six radiologists, the AI system outperformed all of the human readers… We ran a simulation in which the AI system participated in the double-reading process that is used in the UK, and found that the AI system maintained non-inferior performance and reduced the workload of the second reader by 88%.

The study results are promising. The AI system outperformed six radiologists, lowering missed cancer diagnoses (false negatives) on the U.S. sample by 9.4 percent and mistaken readings of breast cancer (false positives) by 5.7 percent. It also generalized across populations, something many AI systems have yet to achieve. The researchers didn’t go so far as to suggest their AI system would replace humans.

The optimal use of the AI system within clinical workflows remains to be determined. The specificity advantage exhibited by the system suggests that it could help to reduce recall rates and unnecessary biopsies. The improvement in sensitivity exhibited in the US data shows that the AI system may be capable of detecting cancers earlier than the standard of care. An analysis of the localization performance of the AI system suggests it holds early promise for flagging suspicious regions for review by experts.

Beyond improving reader performance, the technology described here may have a number of other clinical applications. Through simulation, we suggest how the system could obviate the need for double reading in 88% of UK screening cases, while maintaining a similar level of accuracy to the standard protocol. We also explore how high-confidence operating points can be used to triage high-risk cases and dismiss low-risk cases. These analyses highlight the potential of this technology to deliver screening results in a sustainable manner despite workforce shortages in countries such as the UK.

At the same time, it becomes more difficult to make the case for approaches that are exclusively human. It is hard to imagine that patients, insurance companies, and others won’t demand AI systems augment what humans are doing. This is especially true in healthcare, but it will likely become increasingly true in other domains as well. What tasks would you want humans to do alone if you know you can get better results (greater accuracy, faster turnaround, etc.) when human capability is augmented with AI systems?

Humans will need to learn how to incorporate these types of AI systems into their workflows. The next big step for AI seems to be “operationalizing AI.” This is likely a decade in the making, but slowly you will see individuals figuring out how best to work within environments that are being redefined by AI systems.

A key tenet of innovation and technology is the premise that we deploy technology at a higher level (more widely and with greater frequency) as it moves from a scarcity to a surplus. I call this the Law of Technological Abundance.

The classic example I like to use to describe this phenomenon is the case of computing power in the early 1980s. In 1981, Xerox released the Xerox Star. It was the first commercially available computer to provide a graphical user interface (GUI). It sold for around $75,000 and didn’t garner much commercial success. However, just three years later Apple introduced the original Macintosh. It sold for $2,500 and achieved wide appeal – becoming the first computer with a GUI to obtain mass-market popularity.

So what happened between 1981 and 1984? The price of computing power declined precipitously during this period. Remember that a GUI is essentially a redundant feature. We don’t need it to navigate the computing environment now, and we didn’t need it back in the early 1980s. Prior to the introduction of GUIs, we navigated the computing environment by typing text commands into a command line interface (CLI). Adding a GUI in 1981 placed added strain on a very scarce resource – computing power. I’ve talked to engineers from a wide array of companies who were working on this problem in the early 1980s, and I often get a very similar story – they kept running into hardware constraints around available computing power. However, as computing power goes down in price, one can start to use it for nonessential applications. Or, in the case of GUIs, redundant applications.

This same phenomenon played out in digital storage in the late 1990s and early 2000s. IBM introduced the first hard drive in 1956, but it would be another forty or fifty years until we could begin treating storage like a surplus. Prior to the early 2000s, digital storage was a scarcity that we used sparingly. I can still remember going through my hard drive and deleting files because it was full. However, what was once in scarce supply quickly became an ample resource. While we might have some limited constraints on certain devices (i.e., mobile phones) today, rarely do you see someone delete files or photos because they are completely out of space. These days we simply move photos and files to other places. We offload photos from our phones to computer hard drives or cloud-based storage. In many instances today, we have the same file stored in multiple places. We don’t think much about this redundancy because digital storage has moved from a scarcity to a surplus and we can therefore essentially waste it.

Economists, engineers, and scientists use the term “law” extremely sparingly. There are very few laws because these groups are hesitant to suggest absolute truths that hold in all cases. One of the great powers of Moore’s Law is that it proved itself worthy of the “Law” moniker. It is within this context that I recognize I am making a rather bold claim about the relationship between price and utilization.

The Law of Abundance is influenced (read: amplified) by digitization, so where you see digitization playing out, chances are you’ll also see the Law of Abundance. The Law of Abundance has materialized in the deployment of digital sensors, for example. Our first mobile phones were analog devices, but over time we moved toward digital ones and subsequently began embedding sensors. In 2007, Apple introduced the original iPhone, and while this launch is noteworthy for a number of reasons, it would be three years later, when Apple introduced the iPhone 4, that we would see the Law of Abundance materialize. The iPhone 4 was the first to include two image sensors (adding the front-facing camera) and two digital microphones. The first digital microphone replaced the analog microphone that had captured the user’s voice. The second digital microphone was placed on the rear of the device to cancel out extraneous noise and improve call quality.

I don’t think we’ve scratched the surface of all that lies before us. The Law of Abundance will drive the sensor count on our mobile phones (and everywhere else) up to a multiple of what it is today. Look at cameras, for example. As the price of image sensors has declined, these sensors have moved from a scarcity to a surplus and we’ve deployed them widely. We are not only using image sensors in our mobile phones, but across an increasing diversity of products. We use cameras in vehicles to see what’s behind us when we reverse, but we have also begun embedding them into the front of our vehicles to enable features like adaptive cruise control. We use them in our thermostats and throughout a wide array of other newly digitized objects.

At CES this past week, I saw sensors everywhere. I estimate that of the 20,000 new products launched during the four-day extravaganza, 75 percent or more included some type of sensor, and many of them included multiple sensors. Take, for example, the Withings Thermo, which was released at CES. The Thermo is a thermometer you hold to your forehead. It has 16 temperature sensors embedded in it.

The Law of Abundance is taking hold of sensors and it suggests we will eventually be surrounded by millions of them.

A month or two ago I submitted an offer to buy a house. I received the paperwork about an hour before the submission deadline. Five years ago I would have needed to meet someone in person to sign the required paperwork. Or possibly I might have been able to receive the paperwork electronically, but I still would have been required to print a copy, sign the physical copy, scan it, and finally submit it electronically. Five years ago we weren’t yet in a world capable of supporting an all-digital transaction.

But this time was different. I received and signed the documents through DocuSign Ink. I was able to electronically sign the required paperwork, which then went immediately to the next parties required to review and sign it. I was able to save a copy of the submitted paperwork within my account, which I could subsequently access within the app on either my iPhone or iPad.

For the first time, I felt like I finally saw the potential of tablets and smartphones. Don’t get me wrong. I’m an extremely heavy user of smartphones and tablets. But I’m also a heavy user of traditional notebook computers. I’ve always been a believer that smartphones and tablets make sense for situations defined by a time/location/task context. In other words, tablets and smartphones make sense, but sometimes traditional computing makes more sense. This experience called that belief into question. I’m beginning to think innovation over time can close any chasm between what one can do on a tablet and what one can do more easily on a notebook computer. Notebooks are still far more efficient than smartphones and tablets for some activities (again defined by time/location/task), but I’m no longer convinced that will always be the case.

Since 2010, tablet ownership rates have increased significantly – rising from roughly 10 percent at the start of 2011 to over 40 percent today (in another post I’ll talk through why I think tablet ownership rates are actually close to plateauing). While this growth has largely been driven by entertainment consumption, I’m beginning to think that could change.

In the last three years I’ve witnessed my kids relying almost exclusively on tablets and smartphones. While I’ve largely assumed this was a result of the type of computing activities they are involved in (gaming), I’m beginning to wonder if tablets will mature and evolve quickly enough to satisfy their future computing needs. We might soon start talking about “notebook-nevers” as a cohort of heavy computer users who never owned a notebook computer. A recent survey of adult smartphone and tablet owners found 35% of users prefer to access the Internet on their smartphone and 14% prefer their tablet – suggesting only a bare majority still prefer desktop or laptop computers to access the Internet.

Tablets and smartphones are more personal than traditional PCs – as a result they are redefining what personal means.

One of the next big hurdles for the tablet/smartphone platform is file organization. Apps are largely siloed, and as a result related documents are siloed within the apps. Yes, there are cloud storage services like Dropbox and services like Google Docs, and these might play a large role. In the recently released iOS 7, Apple redesigned the photo gallery by adding curation features – organizing the photos into “moments.”

An all-digital world accelerates commerce. As “things” become digital, or as physical non-digital things gain a virtual and digital identity, the speed at which they can move approaches the speed at which digital things can move. The result is that the speed of commerce for both digital and non-digital things accelerates.

In a digital world everything is for sale. Airlines are now starting to auction off upgrades. Professional sports teams now sell seat upgrades during the game. The list goes on.

In a digital commerce world, things also approach infinite divisibility. You could imagine prices for upgrades eventually becoming a function of the intensity of the game or other elements at any given moment. Seat upgrades would essentially be repriced every minute.

Because prices can change constantly, market operators can ensure that they are always priced at the market-clearing price – always just selling out.
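One simple way to implement “always just selling out” is a pacing rule: compare the share of inventory sold against the share of the sales window elapsed, and nudge the price up or down accordingly. A hypothetical sketch (the 5 percent step and the pacing rule are my own illustrative choices, not any operator’s actual algorithm):

```python
def reprice(price: float, sold: int, total: int,
            elapsed_min: float, window_min: float,
            step: float = 0.05) -> float:
    """Nudge a price toward the market-clearing level.

    If a larger share of inventory has sold than the share of the
    sales window that has elapsed, demand is running hot: raise the
    price. If sales are behind pace, cut the price so the inventory
    still sells out by the end of the window.
    """
    sold_frac = sold / total
    time_frac = elapsed_min / window_min
    if sold_frac > time_frac:
        return price * (1 + step)  # ahead of pace: raise price
    if sold_frac < time_frac:
        return price * (1 - step)  # behind pace: cut price
    return price                   # exactly on pace: hold

# Halfway through a 120-minute game, 30 of 40 upgrade seats have sold,
# so the $20 upgrade is ahead of pace and gets nudged up 5%.
print(reprice(20.0, 30, 40, 60, 120))
```

Run every minute, a rule like this converges on a price at which demand just exhausts supply, which is exactly the “always just selling out” behavior described above.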

Kraft Foods recently worked with Intel to develop a vending machine capable of making product recommendations based upon your demographic makeup or other details. The machine uses a front-mounted camera to identify characteristics such as age and gender, though it could eventually monitor a host of different characteristics. Recommendations correlated with identified characteristics provide another example of how digitizing information accelerates commerce.

With a rapid rate of technological change comes another development: in the digital world we now live in, we routinely create technology to stop other technology from working, which in turn spawns additional innovation aimed at stopping that technology from working.

Wide deployment of home telephony spawned the creation of the answering machine. While its intended use was to field calls while the homeowner was away, it was frequently used to help individuals avoid certain incoming calls. Caller ID frequently serves a similar purpose today. Universal home telephone service spawned technologies that place limits on the prior technology.

Over the last thirty years we’ve built a ubiquitous cellular network. We are now building technologies to limit and control the ubiquity of this network. As mobile phones have become tools used to detonate bombs, for example, we are deploying technologies to block cellular reception in prescribed areas. We are exploring using technology to stop your cell phone from working while you are the driver of a car.

There are a plethora of examples around us. The iterative nature of technology has us constantly attempting to curb technology’s application. Technology seeks ubiquity. In turn, we use technology to guide, mold, and hinder that ubiquity. We are constantly trying to stop what we’ve started.

I’m at the beach this week and took along my Chromecast.  A few thoughts from the week:

  1. It works great. A very seamless experience. I used the new iOS app to set it up and it worked perfectly from the start. A great user experience will enable it to go a very long way; too many devices miss a seamless and quick out-of-the-box experience.
  2. Rethinking the home theater. Chromecast and future services and devices like it will cause us to rethink the traditional home theater – both the hardware systems we use and the way in which we use them. Chromecast essentially opens up a window for streaming through an HDMI port. The house we are currently staying in has a dedicated home theater room with an overhead projector. While we could connect the Chromecast to the HDMI port on the back of the projector, we weren’t able to stream the accompanying audio because that was run through a separate receiver. In all of this, I think Chromecast-like services, and ultimately Chromecast-like functionality, will cause us to rethink what we do – or want to do – with different hardware configurations. The role of dedicated home theater rooms will start to evolve, as will other viewing areas. What we watch and where we watch it – all of it has the potential to change.
  3. Local content is key. To what extent Google will allow local content to be cast remains uncertain, but streaming local content is ultimately key to the traction Chromecast and competing services will garner. Several friends staying with us this week – being exposed to Chromecast for the first time – remarked that Chromecast addressed a problem they were previously trying to solve with long cables. In almost all of these instances they are looking to stream/cast local content. This is especially true as more screens become Internet-enabled directly. CEA data shows roughly 30% of LCD TVs sold year-to-date are Internet-enabled – up from 23% during the same period last year. If the TV or other “screened” device is already connected to the Internet, the story is less about delivering mainstream media and more about delivering local content like photos or videos. As an aside – casting a tab from within the Chrome browser also worked seamlessly.
  4. Shifting media purchases. In the short term, Chromecast has the ability to shift media buying channels. When looking to rent something not available on Netflix, I might typically turn to Amazon Instant Video or some other service. But Amazon Instant Video doesn’t currently stream through Chromecast, so we found ourselves turning to Google Play instead. Casting Google Play videos through the YouTube app worked seamlessly. I’d presume Amazon Instant Video support will come eventually. More important will be UltraViolet support through apps like Vudu – allowing users to stream their growing digital libraries.

There has been much written about how digital is broadly changing news dissemination, but beyond simple replacement of the paper alternative and an acceleration of “news” to satisfy an always-on consumer, I think there is a deeper change afoot.

Yes, “traditional news” is undergoing significant change through the direct and indirect influences of digitization – something that has largely been well covered. But an always-on digital consumer is also driving other changes outside the newsroom. For example, immediately following the Boston bombings, the Boston Globe set up a spreadsheet (ok, technically a Google doc) on its site where individuals could post offers of help and those seeking help could find it. The Globe has left the doc online as a tribute to all those who offered help in the ensuing aftermath.

In this small example, I see the role of the newspaper changing. While it seeks to remain an important distribution platform for news, it might also find relevancy elsewhere. In the early history of the United States, the colonial tavern was a physical place for news dissemination – something that was largely replaced by newspapers as the young nation urbanized. But beyond news dissemination, the tavern was also often the physical hub of the community. Newspapers are well positioned to reinvent themselves – doing what the Globe did – and becoming the digital hub of their underlying communities.

At rare times in our collective history, a tech product is introduced that is so new it doesn’t have a market. Its capabilities and functionality are largely undefined – left to others to uncover. These devices and services offer tremendous promise. They offer promise about what can come to fruition through the right application of the technology. They offer promise of what tomorrow can bring. The future is wide open to these products. They are positioned to disrupt.

Edison originally thought his phonograph would be used for deathbed recordings and limited dictation. But in application, it changed entire industries and ultimately our culture. Philo Farnsworth’s television system did the same. Then the personal computers of the 1970s. More recently, mobile telephony, followed by today’s smartphone.

These product introductions might not always make sense when taken out of this context. In 2010 we saw it with 3DTV. Ultimately, 3DTV didn’t fulfill market expectations within the timelines prescribed to it. But the buzz around the initial launch was driven by the belief that it might change things as we know them. Tablets in 2011 held the same position. The jury remains out on whether tablets will fulfill their destiny of fundamentally changing how we compute and interact with other experiences (i.e., TV and the “second screen”).

This year we have several technologies with great promise. I believe the appeal of smartwatches encompasses this great promise. Google Glass falls into this category as well. New applications applied to a unique form factor of computing power and connectivity have the potential to disrupt how we live our lives. Unknown applications are waiting to be discovered.

Over the last 24 hours there have been hundreds of articles written about the realization that selling Google Glass units received under the Explorer program is strictly verboten (see here and 1,000+ other places). Overzealous attorneys aside, I think (I hope) this restriction is designed with the intention of keeping this new technology in the hands of those Google believes are best positioned to help realize its great promise.