Let’s make 2017 the Year of “Prove it” in healthcare innovation

Mahek and I have a running conversation on big company meltdowns (mostly in healthcare). For each one, we discuss who was involved (personalities, investors, consumers), what was the promise and hype, what was the disconnect with reality, and what triggered the ‘oh shit, this is krap’ moment for all.

Of course, at the top of our list is Theranos. But there were other companies who claimed big, grew fast, became famous, and then bombed.

Is this just failure to deliver or is there a more insidious problem at work? Erin Griffith wrote an insightful article on fraud in Silicon Valley. She writes about a long list of companies that took investors along for a ride, with a mix of bluster and swagger, often with catastrophic side effects for the industry and the people involved.

And part of me wants to believe that it’s deliberate fraud. But I like to give the benefit of the doubt, and think that what comes into play is wishful thinking that gets locked in and forces the company to claim the wishful thinking is true. Kinda like a white lie turning into a smoking black grease of a lie that sticks to everyone and everything and can’t be removed.

I’ve seen it up close.

An antidote to this potential fraud is actually proving your solution works as advertised. No, it’s not enough to have customers, as they can also be hoodwinked by the hype; keep in mind Theranos had a customer: Walgreens, not too shabby. No, it’s not enough to have good funding; Theranos had solid funding, though from many folks with no experience in healthcare. And no, it’s not enough to have your own secret data proving it works; you need to be able to show it to others, transparently.

In short, the proof of the pudding is in the tasting. If no one can taste it – you get what I mean.

Prove it
Lisa Suennen, who has a good eye for healthcare investments, wrote a great article on health startups declaring:

“the digital health theme for 2017 should be: you show me the evidence it works, I’ll show you the money!”

In the article she points out the trends in health investment (fewer dollars for more companies), consumer trends (not favorable), and the value these health companies have provided investors (still to be proven).

One area she discussed revolved around the sheer number of companies trying the same thing:

“I would love to see a lot less of companies that are “me too” and a lot more of companies with unique solutions to underserved problems.”

I have often mentioned that folks are focusing on the big three (obesity, diabetes, cardiovascular health) to the exclusion of other areas, such as poverty, access, mental illness, and addiction. How many fitness band companies can the market support? And why is it that none of them are making any headway?

But the article on the whole is about how investment in healthcare gadgets has seemed to be about claims and shiny devices, with little proof of effectiveness.

“I think that the convergence of IT and healthcare is here to stay and the trick is making it useful not cool. Trendiness does not equal value. Technology does not equal good.”

“I’d also like to hear some evidence of how all of this big data/AI/machine learning work is resulting in actual activity to change physician and consumer behavior, particularly around improved diagnoses and avoidance of medical errors. So far most of the talk has been about technology and too little of the talk is about results.”

Creative distraction
Eric Topol, a big booster of the use of digital tools to transform medicine, actually has a healthy dose of skepticism when approached by companies making bold claims. In a recent interview, not only does he raise his eyebrows in doubt, but he also admonishes Forward, a healthcare startup with a coterie of notable investors, to prove its methods and technology. He was baffled by all the PR glitz and saw some things that just don’t make sense, especially because he basically knows all the tech that’s out there.

“I would be firstly interested in what new tools they are using because are they proven, are they validated, are they well-accepted, and moreover I am particularly interested in publishing results to show that this gadgetry is helping these people,” he said.

What’s interesting to note is that in the article he also mentions the ‘prove it’ challenge he gave Theranos’ Holmes when she approached him. He was impressed, but pushed her to do a head-to-head comparison with established tests.

“If you want to be an outsider and be a disruptor of healthcare you are still held accountable to the same standards of ‘You got to prove it.’ One of the things is that if you have technology that’s not proven, everyone assumes that it’s harmless but it could actually be harmful when you get incidental findings or if you come up things that are not true.”

Put the lime in the coconut!
I claim that none of this is surprising. Investors are partly driven by wishful thinking. But they also partly have no idea what they are investing in.

Theranos had that ‘maverick’ Jobsian feel to it, trying to disprove that “only good science, led by medical professionals, backed by data and able to withstand review by outsiders, can succeed.” At some level, that is true. I don’t think you always need medical professionals (don’t flame me). But you always need good science. As this article is kind enough to note by comparing Theranos’ go-to-market strategy with two others’, you need to show evidence! Prove it!

If you are going to claim that your baby monitor catches SIDS, then it better. No wishful thinking can change the truth. And you are putting a lot of children at risk. Oh, someone already did this and the FDA isn’t happy.

If you’re going to be used by folks making sure they are not too inebriated to drive, you better be accurate. Oh, someone screwed up and is being punished.

If you’re going to claim that consumers want to measure their activity, you better be able to articulate why someone wants to measure their activity. Otherwise, you won’t last. Oh, Fitbit isn’t doing so well.

Digital snake oil
This sobering reality is not recent. The FT wrote about this early last year. And my skepticism about the use of devices in healthcare has been well documented for many years.

Smartwatches, activity sensors, whiz-bang care models that are more flash than substance – this is the new era of digital snake oil and the only way we can get through this is by having everyone transparently prove their value.

Note, I don’t mean to say this whole area of healthcare is digital snake oil (as others have claimed). But we all need to be vigilant and demand proof for every claim.

Let’s make 2017 the Year of “Prove It”.

What do you think?

Image from hirotomo t

Fascinated by fake news: AI, content, and being human

All the hubbub around fake news and the presidential elections got me thinking about AIs; about how we find, review, and share content (a long-term topic for me); and about how human trust and belief hinge upon millennia-old social strategies. And I’ve read a bunch of articles on how fake news has set off a bunch of navel gazing, soul searching, and finger pointing.

A BuzzFeed News analysis has identified the 50 fake news stories that attracted the most engagement on Facebook this year. Together they totaled 21.5 million likes, comments, and shares. Of these stories, 23 were about US politics, two were about women using their vaginas as murder weapons, and one was about a clown doll that actually was a person the whole time.

Source: The top fake news stories of 2016 were about the Pledge, poop, and the Pope – The Verge

Human in the machine
I like to consider myself a relatively seasoned netizen, one who has developed a few habits to fend off spam, phishing links, and disreputable content on blogs and social media. With respect to fake news, I’m a skeptic already, even questioning news from legitimate sources, so I think what’s between our two ears as regular humans is a good start for getting savvy to fake news.

Indeed, Kyle Chayka, in The Verge, has a thorough article showing aspects of fake news that stand out stylistically online. Alas, he also points out that these stylistic differences persist partly because Google and Facebook stylistically homogenize all news in our preferred mobile interfaces (so a slicker design buys a fake news site nothing), and partly because the fake news providers see no benefit to improving their stylistic features. Though this would certainly change if those stylistic features started to pay off.

When enormous, undiscerning platforms like the two tech giants hoover up content, they disguise it, no matter the source. It doesn’t have to be that way.

Source: Facebook and Google make lies as pretty as truth – The Verge

With that ‘analytics between the ears’ sort of spirit, Google and Facebook think they can solve this problem (urgent for them, since they are the primary vehicles for fake news and its stylistic homogenization). Facebook has toyed with editorial boards, human moderators, and expanding its objectionable content process to include fake news. It seems Facebook is making a concerted effort to include existing fact checkers, to label suspect content more visibly, and to tweak its ad model to reduce click-bait incentives.

Facebook is inherently a human-based business, so it’s good to see them including humans in the process of tackling fake news. Google, on the other hand, is the big SkyNet, the AI in the sky. Its take on fake news is better algorithms, not always with a decent outcome.

Ghost in the machine
There is a business model behind fake news and changes to the playing field will lead to changes in the look and feel of fake news, so long as the business model supports those changes. Therefore, we’re in for an arms race.

Fake news fighters are up against determined and smart individuals who will eventually use AI to keep ahead of anti-fake news systems in a battle worthy of a Turing Test. For example, altering images is an old technique, but what happens when AI can make convincing image (and audio and video) manipulation happen at a large and overwhelming scale (smile)?

Oh, you say, but AIs won’t be able to write the fake news itself.

Wrong.

Already, many legal notices, sports scores, and other semi-formatted content are being algorithmically generated for online publication. Even AI novices can create a content generator. I have mentioned before how there’s a whole field of computational literature and how we are applying AI to creative endeavors we think are uniquely human. What happens when AIs create fake news stories that look stylistically real, sound real, and claim things that seem real?
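To get a feel for how low that bar is, here’s a minimal sketch of the decades-old Markov-chain trick for generating plausible-sounding text. The corpus here is made up for illustration; real generators just ingest piles of scraped headlines or articles.

```python
import random
from collections import defaultdict

def build_chain(corpus, order=2):
    # Map each run of `order` words to the words seen to follow it.
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=30):
    # Walk the chain from a random start, sampling a successor each step.
    out = list(random.choice(list(chain.keys())))
    for _ in range(length):
        successors = chain.get(tuple(out[-order:]))
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

# Hypothetical corpus; any scraped text works the same way.
corpus = ("breaking news the senator announced a new plan today "
          "breaking news the mayor announced a shocking scandal today")
print(generate(build_chain(corpus)))
```

And this is kindergarten stuff compared to what’s coming: swap the Markov chain for a neural language model trained on real news, and the output starts to read disturbingly well.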

Evil in the machine
Fake news isn’t going away any time soon. Hackers will take over legitimate channels to spread fake news. There are a ton of very big elections coming up in Europe, and fake news is already rearing its ugly head. And while Facebook and the German government are working hard on the legal, professional, and technical aspects of combating fake news before the elections, how do you counter sources of fake news outside the legal structure, outside your borders? How do we filter the signal from the noise, separating the good from the bad? We struggle with this filtering even when sources are named and content is brief.

Us in the machine
I truly believe that the social skills we have to be skeptical, deal with claims, and understand information extend well into the online world. The challenge is to maintain the cues and context we are accustomed to using IRL, and to map them onto the online world and make them useful there (I explored an aspect of this in my ramblings on noise posts almost 10 years ago).

The online world has scaled up our capacity to create content and communicate. What hasn’t scaled is our ability to grok it all in the way we’d do face to face. And the social ties that bind us, inform us, and provide context have been frayed or blurred online, making judgment calls even harder (just witness the echo chambers reinforcing themselves on Facebook).

To me, the breakdown we see with fake news is the gap between who we are as social beings and the tools we use online. The challenge is to take fake news as an opportunity to reassess what we do online, how we continue constructing the layer of humanity that is the online world, and how we use the online world as social beings endowed with certain unalienable social abilities.

Image by Christopher Dombres

Where were you 10 years ago today?

On Jan 9th, 2007, I was in London, at the IDEO offices, sitting in one of their conference rooms with a bunch of Nokians and IDEO-ans. I do not recall if we were streaming the audio or refreshing a page of someone live blogging from the iPhone launch event [Update: Matt Miz says live streaming.].

We knew the phone was coming. But it was a momentous evening for us, nonetheless. I think we all knew it was the death-knell for Nokia if it couldn’t match what Apple was bringing to the table. We also cynically shared what we thought the executives’ reactions would be. In those reactions was a hint of the fear and the hubris that Nokia Mobile Phones couldn’t overcome.

Alt-history
The reason we were at the IDEO offices was to design a new world, where the internet and the mobile were united.* We envisioned a time when we’d be online with our phones all the time, constantly connected to our people and their content. Nokia was to be the gateway, the interface to a collection of small windows we could peek through or step through, depending on how much we wanted to do. Holding all the morsels of our internet experience together, Nokia would be the essential brand.

But that future never came to be. Though I see elements of what we envisioned spread across the world today.

Ten years on, Google, Apple, and Facebook are losing their grip on hegemony, much as Nokia did back then. Once more, the players mediating our experience of reality are changing as they offer us new ways to connect, create, share, trade, and transport.

We knew Jan 9th, 2007, would mark a deep line in our lives. Alas, I haven’t seen anything since with that kind of built-up expectation, reception, and potential impact. Ten years from now, what will we look back on from 2017 as the deep shift in the future as we imagined it?

Image from Kim Støvring

*Indeed, we were there for the kick-off meeting for the project I was to lead, as it was my vision, for those who care to know, or who have tried to forget.

Select quotes from the conversation on AI between Barack Obama, MIT’s Joi Ito, and WIRED’s Scott Dadich

I finally got around to reading this interview transcript. Dadich moderates an interesting discussion between Ito and Obama mostly around AI, but also touching on related issues around the impact of new technologies on business and people.

Below, I’ve excerpted some of the more interesting things that were discussed. Please take these excerpts as a reflection of what I am thinking about and worthy of further exploration.

AI in general, AI in particular
Here’s a man who has to keep the whole world in his head, and he’s really articulate about AI – where it is, where it’s going, what the impacts are, what the benefits are.

There’s a distinction, which is probably familiar to a lot of your readers, between generalized AI and specialized AI. In science fiction, what you hear about is generalized AI, right? – Obama

Obama rightfully points out that specialized AI is being used everywhere today. But the AI from sci-fi, generalized AI, the AI that everyone fears, is a long way away. Nonetheless, having that broader fantastical view gets us thinking about the implications AI has for all aspects of our lives, especially when it comes to how we deploy specialized uses of AI.

And, like a president should, while Obama sees the “enormous prosperity and opportunity” AI presents, he is also concerned with the impact AI can have on jobs and wages as certain things are automated by AI.

Ito calls for AI to really be called Extended Intelligence. This is a great term to describe what I have said before: that AI should augment humans, not try to replace them. And indeed, many jobs that have been more cognitive will be disrupted by AI. How we choose the balance between full and augmented automation will impact those jobs and people.

Low-wage, low-skill individuals become more and more redundant, and their jobs may not be replaced, but wages are suppressed. And if we are going to successfully manage this transition, we are going to have to have a societal conversation about how we manage this. – Obama

AI culture
I’ve had the sneaking suspicion that in the past 6 months, AI has gone from being in the background, to being front and center. As Ito articulates best, “this is the year that artificial intelligence becomes more than just a computer science problem.”

So it becomes important that the creation of AI have cultural and societal sensibilities. Yet, as Ito points out, “it’s been a predominately male gang of kids, mostly white, who are building the core computer science around AI, and they’re more comfortable talking to computers than to human beings.” How do we become more inclusive in adding values to AI, ethical AI? And what is the role of government?

Obama also mentioned that his concern wasn’t a runaway AI, but someone empowered by AI to do malicious things. Now the cybersecurity game just got more complicated. Interestingly, his view is not the usual ‘build a wall’ but the attitude toward viral pandemics, a public health model – build a system that can rapidly and nimbly respond to an outbreak.

I think there’s no doubt that developing international norms, protocols, and verification mechanisms around cybersecurity generally, and AI in particular, is in its infancy. The challenge is the most sophisticated state actors don’t always embody the same values and norms that we do. – Obama

And where should AI research come from? Ito points out that a lot of AI research is coming from huge commercial research labs. Obama mentioned how these businesses want the bureaucrats to back off and let them chase AI. But he then pointed out the benefits of including the public and the government in big technological advances.

I think we’re in a golden period where people want to talk to each other. If we can make sure that the funding and the energy goes to support open sharing, there is a lot of upside. You can’t really get that good at it in a vacuum, and it’s still an international community for now. – Ito

Jobs
AI is all about automating intelligence. The industrial revolution transformed work with the automation of factories. AI will displace jobs, but, as Ito points out, “it’s actually nonintuitive which jobs get displaced.” We have already seen paralegal roles being taken over by text scanning systems. What will happen to lawyers, doctors, or auditors? How will AI take or transform their roles?

Both Ito and Obama talk about how these changes in jobs might require a redesign of the social compact – how do we value contribution and compensation?

What is indisputable, though, is that as AI gets further incorporated, and the society potentially gets wealthier, the link between production and distribution, how much you work and how much you make, gets further and further attenuated—the computers are doing a lot of the work. – Obama

We can figure this out
At the end of the interview, Obama mentions space exploration, which leads to using Star Trek as a guide for humanity’s future. Obama, ever the optimist, points out that Star Trek was not about science fiction but about values and relationships, “a notion of a common humanity and a confidence in our ability to solve problems.” He sees the spirit of America being “Oh, we can figure this out.”

Taking Star Trek further, Ito mentions that the Star Trek Federation is “amazingly diverse, the crew is diverse, and the bad guys aren’t usually evil—they’re just misguided.” It is clear this is a world the two of them are always working towards.

A thought
I am not surprised that these two great thinkers, who have great hope in humanity, should gravitate to concepts such as cooperation, empathy, caution, and optimism. I, too, am an optimist, and have faith that the good in humanity will always prevail. Though that faith requires I remember to take a long-term view, a view I am sure guides these two men, and understand that there will be temporary moments of despair when it seems we are not heading in the right direction.

Geez, I wonder why I feel that way?

Go read the full article and see the video and let me know what you thought.


Fitbit buying Pebble will NOT help it crack the code on smartwatches

For some reason, discussions of smartwatches make me twitch. Maybe it’s because I got my first smartwatch over 10 years ago.* Maybe it’s because I’ve watched “the next great category” of mobile devices come and go or fling themselves repeatedly on the rocks of disappointment. Maybe it’s because I’ve played with sensors, data, mobiles, and wearables for a long time and have not seen anyone “crack the code.”

OK, call me a cynic and a curmudgeon. Yes, there are many others in the industry who (should) know more than I do. Though, I don’t see anyone really “getting it.” And, admittedly, I don’t like doing the “I can tell you what’s not right” thing; I’d rather do the “let me help you to the right place” thing of figuring out where the fusion of sensors, mobile, and wearable devices will head (and I do have many inklings).

A pebble in your shoe
The Fitbit CEO says they bought Pebble to help them crack the code on smartwatches. He says:

“We don’t think there’s been any product out there in smartwatches that combine general purposes, functionality, health and fitness, industrial design, and long battery life into one package.” [from: Fitbit CEO says buying Pebble could help it crack the code on smartwatches, The Verge]

Does he mean that not even Pebble has cracked the code? Because if Pebble hasn’t, then buying them won’t automagically impart the ability to crack the code to Fitbit.

In any case, the CEO of Fitbit is looking in the wrong place. I do not see anyone who has all the pieces in place to actually crack the code on smartwatches.

The future is here but unevenly and all that jazz
I mentioned I got a smartwatch over 10 years ago. It was a Suunto T6 (pictured), which connected to a heart rate band and a foot pod accelerometer (I didn’t buy the GPS pod because I already had one for my phone). Suunto hasn’t stopped making what they called wrist-top computers. Nor have Garmin or Polar, Suunto’s competitors since that time. A good example of how these watches have evolved is my favorite, the Garmin Fenix 3.

What lessons can Fitbit learn from Garmin, Suunto, and Polar, who have knocked it out of the park with smartwatches (ugh, “smart” is so stupid, can we just call them “watches” fercryingoutloud)?

True, Fitbit, and all the others, want to hit the large “consumer” market. Since Fitbit is obsessed with measurement, by “consumers” they mean all those people who don’t HAVE to measure themselves (unlike the chronically ill) and who aren’t DRIVEN to measure themselves (unlike athletes or QSers).

Fitbit and peers seem to be proposing WHAT folks should measure and HOW. But their lack of success in getting traction with those consumers suggests that these WHATs and HOWs do not match the WHATs and HOWs that would capture the consumer market.

By focusing on a driven segment, Garmin, Suunto, and Polar have been able to hone their offering to their customers and prove that watches with a lot of computing power and location awareness are something a segment of folks will pay for and keep using.

What’s the equivalent hook, one that matches what Fitbit and Pebble bring to the consumer segment with what that segment really wants out of these devices?

Sales flop
We all know that these devices – from the fancy Fitbit pedometers to the expensive Apple watches – are not holding folks’ attention, especially when compared with the rabid attachment folks have for their phones. Everyone likes to track the sales of these devices that Fitbit and others are churning out. Why are we not talking about usage rather than sales?

Back in my day, Nokia wasn’t only bent on selling phones, but also on thrilling the user so that the devices would drive up ARPU (average revenue per user, aka meterable usage) for carriers and a repeat purchase of a phone. Device success wasn’t just tied to sales, but to usage and repeat sales.

What’s the equivalent of ARPU for Fitbit, Pebble, Apple, and others? What’s the churn? What’s the repeat purchase for subsequent models?

Can someone find me those metrics?

I’m not the first to grumble about this. There have been articles (here’s one from 2015) and analyst reports (a PDF report from 2014) on churn and usage for some time.

Why don’t the vendors report these metrics?

Wooden strategy
The Palm had its humble origins in a wooden block that the designer carried around to capture how he would use a mobile handheld computer. Who is doing the equivalent of Palm and the wooden block, but with watches?

I have no idea how Fitbit and others actually design their mobile devices. But from what I see, there are folks approaching wearables from the device and sensor perspective, pushing the product promise around steps, accuracy, sleep measurement, heart rate sensors, and so forth. Another group seems to approach wearables from the data perspective, focused on showing users data galore.

The answer lies somewhere in between: the success of wearables will be in the fusion of data, devices, and, most importantly, how the user experiences that data and those devices. Hence, nobody seems to have the right go-to-market approach. Most of the vendors focus on the data and the device, the apps and developers, not the core human need that would get someone to buy one in the first place – that is, a need that is relevant to the general consumer.

Taking the measure of Fitbit
Fitbit (and, I feel, all the others) are using measurement as the main draw of all their watches and gizmos. Measurement is what folks who are DRIVEN want to do and what folks with chronic conditions HAVE to do. But is that what the general public wants out of a device they carry with them everywhere?

Fitbit, based on the quote above, has missed the trick if they want to get into the general consumer world of watches. And I know what happens when device manufacturers can’t think beyond their device features.

If you want to make a digital device on someone’s wrist absolutely essential, it’s not going to be due to whiz-bang sensors or measurement, or fantastical dashboards or indicators of my steps or fitness.

A digital watch will be essential when it helps me work better, be better, communicate better, know better, feel better, get through my day and relationships better.

Ah, of course, we already have that digital device – it’s our phone.

My challenge to you
Put a frakkin’ wooden block on your wrist. Tell me why you look at it or want it to do something as you go about your day. How does it complement the things you carry, such as your keys, phone, and wallet – the things you check for before heading out the door, the things you would turn around for and go back home to get?

If anyone is studying this, let me know. If you think I’m full of krap, let me know.

Until then, I’ll be a curmudgeon, twitching every time someone thinks they can “crack the code” around smartwatches.


*Hey, if you go “WTF, the T6 isn’t a smartwatch” – of course it isn’t, by today’s expectations. That’s like saying the PowerBook 100 wasn’t a laptop.

What is 777labs?

For the past 20 years, I have been helping folks in marketing and sales identify, target, build, and nurture customer relationships, market opportunities, and brand growth. I have either led or heavily influenced sales strategies, marketing efforts, or solution design and development, giving me a unique perspective on how strategy and execution cut across key areas of an organization and affect their customers.

My goal is to make this experience available through 777labs. I want to help my clients build an engagement strategy, whether the customer is another business or a consumer of a service or product. And I want to help build the content that enables the client to deliver on that strategy, be it sales content to provide the sales staff competency and credibility, or clever tweets and blog posts.

This is what I have been doing for decades, and this is what I enjoy doing.

A list of what I offer
Marketing: Digital marketing strategy, Content strategy, Social media strategy, Marketing strategy, Marketing content, Brand building, Marketing analytics, Community management

Sales: Customer engagement strategy, Sales strategy, Sales content, Sales training, Sales analytics

Solution design: Mobile service design strategy, Web service design strategy, Product and solution marketing, Solution design strategy, Data enrichment strategy

Healthcare, in particular
While I can do these things for companies in practically any industry, I’d like to focus on one industry I have extensive experience in: healthcare. I’m particularly interested in providing guidance to clients who are not traditional healthcare companies, but who are building a healthcare vertical or are interested in figuring out how to enter the healthcare market.

Contact me
If you are a company looking to take your product or service into healthcare, or you want to grow your digital health or patient engagement activities, 777labs can help. You can contact me, Charlie Schick, at firstname.lastname@777labs.co.

Pause for station identification

Aut viam inveniam aut faciam – “I will find a way or make one” – on my Harvard University chair, kindly given by Gary Silverman on my departure from his lab

Through the years, each of these pauses has been a definition of where I am in that sliver of time. Alas, I’m currently exploring a few potential paths, so defining where I am in this sliver of time is important to me.

So here we go.

Me
Hello. My name is Charlie Schick. I’m passionate about the intersection of healthcare, mobile, and data; particularly how we can improve the way healthcare organizations engage with customers, patients, and families. I also advise companies on mobile, marketing, and analytics.

I have 20 years of experience engaging with customers through various roles in marketing, sales, solution design and development, and research at major brands such as IBM, Nokia, and Boston Children’s Hospital. I have also been influential in leading these major brands toward innovative ways of engaging with customers, particularly through digital solutions.

What I’m doing now. Again.
My first gig out of the lab was my own company, Edubba, providing editorial consulting – running proto-blog sites, being a columnist for some magazines, providing wordsmithing for product reviews and marketing material.

That independent effort quieted down when I moved to Nokia, though I did keep working on the side – writing feature articles for organizations, a biz plan here or there. The bulk of my writing and strategy work in the past 20 years, though, has really been corporate – the Beagle; Hello Direct; the Nokia Cloud project; the Nokia corporate blog; Children’s Facebook page and blog; sales consulting and occasional writing for IBM; trying to make a difference at Atigeo.

Consultant-reborn
Now that I am on my own again, I’m going back to my first job out of the lab. I’m launching a new consultancy, 777labs. This time I have a broader scope than before, tapping into my many years of experience in the corporate world and focusing on where I want to make an impact.

777labs is a customer engagement strategy consultancy helping clients identify, target, build, and nurture customer relationships, market opportunities, and brand growth. Our services cut across sales, marketing, and solution design strategy and also include the necessary tools, analytics, and content development. Our primary focus is in healthcare, including providing value to non-healthcare companies who are entering the healthcare market.

I’m excited to get back into leading this work full-time, for myself.

Thinking and speaking and helping
Beyond the new consultancy, I want to continue giving talks and running panels. I regularly speak in front of large audiences and in the offices of CxOs, sharing my experience and interests through various forms of media and design. Send me a note if you want to know more.

And of course, my standard disclaimer
(riffing off of an ancient Cringely disclaimer)
Everything I write here on this site is an expression of my own opinions, NOT of any of my clients. If these were the opinions of my clients, the site would be called ‘777labs’ client’s something or other’ and, for sure, the writing and design would be much more professional. Likewise, I am an intensely trained professional writer :-P, so don’t expect to find any confidential secret corporate mumbo-jumbo being revealed here. Everything I write here is public info or readily found via any decent search engine or easily deduced by someone who has an understanding of the industry.

If you have ideas or projects that you think I might be interested in please contact me, Charlie Schick, at firstname.lastname@molecularist.com; via my profile on LinkedIn; or via @molecularist on Twitter. And if you’re interested in working with 777labs, you can contact me at firstname.lastname@777labs.co.

Peanut butter and chocolate moment: AI goes great with…?

I have an ideation game I play called “Peanut Butter and Chocolate.” Basically, it’s mashing two seemingly unrelated things to think of how they would go together (I’m sure others have a similar technique). For example, most recently, we wondered about toilet paper (everyone needs toilet paper) and how it might go with religion (very popular) or 3D printing (also popular, though not as much as toilet paper or religion).

So, as is evident by the title of this post, what happens when we add AI to something? For me, I turn to two areas that are never far from my mind: healthcare and mobile.

Healthcare
I have seen machine learning being used to develop better models around readmission (yawn, isn’t it always readmissions?). What I’d like to see are more optimization solutions, such as optimizing staff, equipment, or drug usage. Or how about helping patients choose the best health plan based on their medical and resource usage history (this is a dear one to me).

Another area where I would like to see AI applied is behavioral health – can we help patients manage their mental health, and what can we provide caregivers to better manage relapses or even violence? I think we spend so much time on the Big Three – heart disease, obesity, diabetes – that we fail to hit the places that are not getting attention, such as mental health, geriatrics, or the impact of poverty on health.

Though I always come back to my original concern with AI in healthcare – will it ever be better than a good nurse armed with some good data? Watson, what’s your comment on this?

Mobile
I think back to my early years in mobile and how I used to talk about the mobile lifestyle. The success of AI in mobile will also be related to how it flows in with the mobile lifestyle. Though I think these days folks are a bit more savvy with mobile than way back when.

But there’s been an inordinate amount of focus on speech-driven agents that are really clever assistants. Yes, I am looking forward to agents talking to agents to schedule meetings, book tickets or restaurants, and the like. Yet these agents require me to stop what I am doing and talk to them, breaking the mobile flow.

I want AI to recede into the background. I don’t want to tell the AI what to do; it should know. For example, when I schedule a meeting, don’t just tell me about the participants; learn what info I usually collect before a meeting and summarize it for me. Or learn what I like to know at the start of the day and summarize that for me. Or pay attention to what I am doing and where I am, and make sure I get things done, based on my email or my calendar.

OK, so I am not so clear on where AI can go in mobile, but I do see we need to get beyond our fixation with bots and speech-driven agents.

Have you seen anything interesting around AI in mobile?

Image from Graham Hellewell

When AI is Artificial I: humanity, culture, art, emotion

Doesn’t all AI end up being a reflection of who we are as humans? On the practical side, I’ve mentioned bias in how we build AIs and the prevalence of conversational bots. But we all know of the endless numbers of books and movies with stories of AI becoming something we cannot distinguish from humans.

Is this simply the Pygmalion in all of us? We turn to external expressions to make sense of the human condition, through art, religion, science, sport, politics. Why not AI? And with so much expression imbued in an AI, might we not fall in love with it, or want it to be something we can fall in love with?

Cre-ai-tivity
I’m not going to go over all the examples of AI in art. But I would like to point you to a very interesting short movie written by an AI. Fed a corpus of sci-fi scripts, the AI, given a seed of an idea, wrote a short sci-fi script of its own. The video is the director’s and actor’s interpretation of the script.

The interesting thing is that it comes off as an off-beat movie, but with a touch of something deep that must be there. And if you think the dialogue is too off-beat, read something like Naked Lunch, or Kafka.

And here’s a recent article on a performance of various pieces from various genres of music written by AI but performed by humans. This sparks a very interesting discussion on the balance between statistically creating music (the AI) and the human touch. The example used in the article is a pair of Mozart pieces – the one that’s all AI is all over the place, but the one with a bit of human intervention begins to have small stretches that feel like Mozart. But, of course, a fully Mozart-style piece does not emerge from the machine.

Though, one of the composers sees the AI as a collaborator rather than a composer in its own right, and that’s what is exciting to some musicians.

He points out that although the music sounds like Miles Davis, it feels like a fake when he plays it. “Some of the phrases don’t quite follow on or they trip up your fingers,” he says. This makes sense, as this isn’t music written by a human with hands sitting at a keyboard; it’s the creation of a computer. Artificial intelligence can place notes on a stave, but it can’t yet imagine their performance. That’s up to humans.

Source: A night at the AI jazz club – The Verge

My Fair AI
I think we approach the Ultimate Pygmalion in our desire to create simulacra of emotive, interactive beings. For example, there is no end to the wee AI-imbued gizmos we try to create to interact with us. Will these gizmos be as smart as a puppy, or try to do more and end up annoying? Anki’s Cozmo is the latest I’ve seen and a lot was put into the emotional intelligence of the toy.

And then there’s this very interesting story about an AI bot maker who lost a dear friend and used the texts her friend left behind to create a conversational memorial to him. The author of the article is sensitive to the emotional impact of this AI memorial, but also branches off into the areas of authenticity, grieving, personality, and the role of language.

Art is meant to get us to think about who we are as humans. The bot creator only wanted to build a digital monument to have that one last conversation with a dear friend. Yet, she touched a nerve that we could not have touched without her skill in AI and capturing a voice. Rather than create something that helps us do something or cope with something, her digital monument brings up many thoughts on humanity, culture, art, emotion. Should we build bots grounded in real personalities, as derived from their digital textual contrails? What happens to one’s voice when one has died? If our voice can persist, what does it mean to who we are, our mortality, to the ones we leave behind?

What do you think?

Image Pygmalion by Jean-Baptiste Regnault, 1786, Musée National du Château et des Trianons, from WikiCommons

Come down to earth: some hidden truths about AI

You know that a tech trend is growing when there are more conferences and training programs than you can shake a stick at. And also, the trend is picked up by the amazing Science Friday and you get to hear some interesting developments and future direction.

One thing that you really don’t hear often is the “hidden truths.” The Verge recently wrote a very nice article highlighting three places where AI falls short – training, the bane of specialization, and the black box of how the AI works.

Machine learning in 2016 is creating brilliant tools, but they can be hard to explain, costly to train, and often mysterious even to their creators.

Source: These are three of the biggest problems facing today’s AI – The Verge

I had the good fortune to work with some very talented data scientists who were regularly using machine learning on healthcare data to understand patient behavior. Also, at IBM, I was able to learn a lot about how Watson thought and how well it worked. In all cases, the three hidden truths that The Verge had commented on were evident.

Teach me
The Verge article starts by pointing out the need for plenty of data to be able to train models. True. But for me, the real training issue is that it’s never “machine learning” in the sense of the machine learning on its own. Machine learning always requires a human, for example to provide training and test data, to validate the learning process, to select parameters to fit, to nudge the machine to learn in the right direction. This inevitably leads to human bias in the system.
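As a toy illustration – not any particular production pipeline – here’s how many human decisions sit inside even the simplest scikit-learn workflow; every commented choice below is a human’s, not the machine’s:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A human chose (and labeled) the data; here it's synthetic for the sketch.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# A human decided how to split training data from test data...
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# ...which model family to fit and how strongly to regularize it...
model = LogisticRegression(C=1.0, max_iter=1000)
model.fit(X_train, y_train)

# ...and whether this number is "good enough" to ship.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Every one of those choices is a place where the data scientist’s assumptions, and biases, leak into the “machine” learning.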

“The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitation,” said University of Utah computer science researcher Suresh Venkatasubramanian in a recent statement.

Source: Computer Programs Can Be as Biased as Humans

This bias means that no matter how well created or how smart, the AI will show the bias of the data scientists involved. The article quoted above references the issue in the context of resume scanning. No, the machine won’t be less biased than the human.

Taking that thought further, I am not concerned only with bias, but with the possibility that the AI cannot be smarter than the human, using the methods we currently have. Yes, an AI can see patterns across huge sets of data, automate certain specific complex actions, come to conclusions – but I do not think these conclusions are any better than a well-trained human’s. Indeed, my biggest wonder with machine learning in healthcare is whether all the sophisticated models and algorithms are any better than a well-trained nurse. Indeed, Watson really isn’t better than a doctor.

But that’s OK. These AIs can help humans sift through huge data sets, highlight things that might be missed, point humans to more information to help inform the human decision. Like Google helps us remember more, AIs can help us make more informed decisions. And, yes, Watson, in this way, is actually pretty good.

The hedgehog of hedgehogs
The Verge also points out that AIs need to be hyper-specialized to work. Train the AI on one thing and it does it well. But then the AI can’t be generalized or repurposed to do something similar.

I’ve seen this in action. We had a product that was great at mimicking the medical billing coding a human could do. After training the system for a specific institution, using that institution’s data, the system would perform poorly when given data from another institution. We always had to train to the specific conditions to get useful results. And this applied to all our machine learning models: we always had to retrain for the specific (localized) data set. Rarely were results decent on novel though related data sets.

Alas, this cuts both ways. This allows us to train systems on local data to get the best result, but it also means we need people and time (and money) every time we shift to another data set.

This reminds me of Minsky’s Society of Mind. Often we can create hybrid models that provide multiple facets to be fitted to the data, allowing the hybrid collection to decide which sub-models reflect the data better. Might we not also use a society of agents, a hybrid collection of highly specialized AIs that collaborate and promote the best of the collection to provide the output?
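To make that concrete, here’s a minimal, hand-rolled sketch of such a society: each specialist model is trained on its own local data set (say, one per institution), and a simple gate routes each new case to the specialist whose training data it most resembles. This is my own illustration, not Minsky’s architecture or any product we built; the class and data names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class SpecialistSociety:
    """A collection of locally trained models plus a nearest-centroid gate."""

    def __init__(self):
        self.specialists = []  # list of (training centroid, fitted model)

    def add_specialist(self, X, y):
        # Each specialist only ever sees its own (local) data set.
        model = LogisticRegression(max_iter=1000).fit(X, y)
        self.specialists.append((X.mean(axis=0), model))

    def predict(self, X):
        preds = []
        for x in X:
            # Gate: promote the specialist whose training data x most
            # resembles, judged crudely by distance to the training centroid.
            _, model = min(self.specialists,
                           key=lambda cm: np.linalg.norm(cm[0] - x))
            preds.append(model.predict(x.reshape(1, -1))[0])
        return np.array(preds)

# Hypothetical usage: one specialist per institution's data.
# society = SpecialistSociety()
# society.add_specialist(X_hospital_a, y_hospital_a)
# society.add_specialist(X_hospital_b, y_hospital_b)
# predictions = society.predict(X_new)
```

The gate here is deliberately crude; the point is the shape of the thing: keep the specialists specialized, and put the generalizing in the layer that decides who answers.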

Black box AI
The third and last point the Verge article makes is about showing your work. I’ve been in many customer meetings where we are asked: what are the parameters, what is the algorithm, how does the model think? We always waved our hands: “the pattern arises from the data,” “the model is so complex, it matches reality in its own way.” But at the same time, the output we’d see, the things the machine would say, clearly showed that sometimes the model could approximate the reality of the data, but not reality itself. We’d see this in the healthcare models and would need to have the output validated and the model tweaked (by a human, of course) to better reflect reality.

While black boxing the thinking in AI isn’t terrible, it makes the AI unapproachable when you need to correct its misconceptions. The example in the Verge article on recognizing windows with curtains is a great one: the AI wasn’t recognizing windows with curtains, it was recognizing rooms with beds, which happen to correlate with curtained windows.

AI is not about the machine
The human is critical in the building and running of AIs. And, for me, AIs should be built to help me be smarter and make better decisions. Some of the hidden truths listed above become less concerning when we realize we should, for now, stick to making AIs as smart as a puppy, rather than imbue them with supposed powers of cognition beyond their human creators. AIn’t gonna happen any time soon. And it will only annoy the humans.

Image from glasseyes view