I recently came across an important study published by the Blue Cross Blue Shield Association. Based on the Blue Cross Blue Shield Health Index, the study found a direct link between a population’s health and a growing economy, higher incomes and lower unemployment.
The results hit home for Lumiata. Health matters, at both a micro and a macro level. Our team believes that if we can empower people to change health trajectories for the better, we can make a difference in this world. But doing that at scale means understanding and handling the complexity of health data, and predicting health with precision. That is no small feat, and it demands innovation. It’s why we are building predictive and prescriptive artificial intelligence (AI) for healthcare. Through our work with Blue Cross Blue Shield (BCBS) health plans, we are predicting individual risk of disease with the precision and clinical nuance needed to improve care and lower costs. We make our predictions actionable by delivering precise, targeted lists that pinpoint who is at greatest risk of chronic disease, along with the medical reasoning behind why we think they are at risk, driving better clinical engagement between payers and providers as they take next steps.
The health challenge we face demands practical innovations. The Blue Cross Blue Shield Association’s study and the Health Index are an important reminder of this, and a call to action. They demonstrate how the right use of analytics has the potential to improve lives at scale, and to mobilize communities and stakeholders to take action toward better health delivery. That’s what drives us at Lumiata.
It’s why I’m excited to be part of a panel at the upcoming BCBS National Summit, “Data Analytics Across the Healthcare Value Chain,” on Wednesday, May 10th at 10:30am, organized by the Blue Cross Blue Shield Association. I see their study as an important reference point. I’ll be joined by other visionary entrepreneurs supported by the BCBS Venture Funds in a discussion about innovations in analytics, including John Donahue, CEO and Chairman of axialHealthcare, and Rodger Desai, CEO of Payfone. I look forward to discussing Lumiata’s early results in applying AI to chronic disease prediction.
By: Ash Damle, Founder and CEO
Rarely does the word “artificial” have a positive connotation—artificial sweetener, artificial food dye, artificial meat, and so on. So, why should we trust Artificial Intelligence?
Even though I’d like to picture AI in healthcare as a friendly, marshmallowy robot, this probably won’t happen (at least, not at first). Instead, AI in healthcare will be developed across multiple subsets of the healthcare industry, from drug discovery research and treatment optimizing plans all the way to being able to recognize cancerous tumors in various health screenings.
Now, I know what you’re thinking: humans can already do all that, so what makes a computer better at it than us? Well, right now, AI isn’t necessarily better, but it’s like that extraordinarily gifted kid in school who everyone knows is headed for great things. With such promising potential, everyone in healthcare is keeping a close eye on AI and how it will be put to use in the coming years.
(Click HERE to read the rest of the article by QuHarrison Terry.)
If there’s any process data analysts, data scientists and actuaries would want to automate, it’s likely data preparation.
Data preparation is a cumbersome and laborious process. In many cases, data analysts spend up to 60 percent of their time just preparing their data before they can use it to drive business decisions. Sound familiar? We have all been through the data preparation nightmare! This number certainly rings true for Emily, a data analyst we know at a health plan, who is inundated with gigabytes of new data every month and loses precious time wrestling with dirty data. In an industry where data sets are extremely complex and high-dimensional, and where timely, precise decisions and interventions are crucial, the time spent just combining and cleaning data becomes costly, both clinically and financially.
We need a better way. Health plans need the ability to put their data to work within the smallest possible timeframe so they can make faster, more precise business decisions that improve their members’ health, reduce costs and help them manage risk.
At Lumiata, we are motivated by the urgency to make sense of health data for the “Emilys” of the world. To reliably drive value, data needs to be corrected, standardized and contextualized. We transform raw data into enriched, standardized and longitudinal records for each individual member so our customers can put their data to work toward analysis and action faster.
Here’s what that means for someone like Emily:
The Need for Correctness: Raw Data Ingestion and Integrity Analysis
We take the time to understand the health plan’s data and assess how well it serves the analysis we conduct. The first step in our Data-as-a-Service (DaaS) process is to ingest Emily’s data as-is from multiple sources, including claims, labs, EHRs and unstructured data. This data is almost always incomplete and has very high dimensionality. We perform a comprehensive data integrity analysis to determine not only whether all the minimum required data is available, but also whether the raw data has been transformed properly. At the end of this largely automated process, we generate a data integrity report, which can be shared with Emily and her internal stakeholders to help them gain a deeper understanding of their own data.
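An integrity check of this kind can be sketched in a few lines of Python. The record format, required fields, and sample claims below are hypothetical, chosen only to illustrate the idea of a completeness report, not to describe Lumiata’s actual implementation.

```python
# Illustrative data integrity check: count missing required fields
# across raw records and summarize completeness.

REQUIRED_FIELDS = ["member_id", "service_date", "diagnosis_code"]

def integrity_report(records):
    """Build a simple completeness report over raw claim records."""
    missing = {field: 0 for field in REQUIRED_FIELDS}
    for record in records:
        for field in REQUIRED_FIELDS:
            if not record.get(field):
                missing[field] += 1
    total = len(records)
    complete = sum(all(r.get(f) for f in REQUIRED_FIELDS) for r in records)
    return {
        "total_records": total,
        "missing_counts": missing,
        "complete_fraction": complete / total if total else 0.0,
    }

# Hypothetical sample: the second claim is missing its service date.
claims = [
    {"member_id": "M1", "service_date": "2017-01-03", "diagnosis_code": "E11.9"},
    {"member_id": "M2", "service_date": "", "diagnosis_code": "I10"},
]
report = integrity_report(claims)
print(report["missing_counts"])  # service_date is flagged once
```

A real pipeline would check far more than presence, such as value formats, date ranges, and cross-field consistency, but the output of even this sketch resembles the kind of report that can be shared with internal stakeholders.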
The Need for Standardization: Raw FHIR Creation
For repeatable analytical or AI processes to be applied, Emily’s data has to be cleaned and standardized. Our proprietary ETL process automatically corrects formatting issues and identifies missing content, then transforms the data into standard, validated FHIR (Fast Healthcare Interoperability Resources) bundles, the emerging standard for medical data representation, exchange and interoperability.
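As a rough illustration of what FHIR-shaped data looks like, here is a minimal sketch that wraps a member and their condition codes in Bundle, Patient, and Condition resources as plain dictionaries. This is a simplification for illustration only; a production pipeline would use a dedicated FHIR library and validate against the full specification.

```python
# Minimal, illustrative FHIR Bundle assembly as plain JSON-style dicts.
import json

def to_fhir_bundle(member_id, condition_codes):
    """Wrap a patient and their ICD-10 condition codes in a FHIR Bundle."""
    entries = [{"resource": {"resourceType": "Patient", "id": member_id}}]
    for code in condition_codes:
        entries.append({
            "resource": {
                "resourceType": "Condition",
                "subject": {"reference": f"Patient/{member_id}"},
                "code": {"coding": [{
                    "system": "http://hl7.org/fhir/sid/icd-10",
                    "code": code,
                }]},
            }
        })
    return {"resourceType": "Bundle", "type": "collection", "entry": entries}

bundle = to_fhir_bundle("M1", ["E11.9"])
print(json.dumps(bundle, indent=2))
```

The value of the standard is that once every member’s record is in this shape, any downstream analytical or machine learning step can consume it the same way, regardless of which source system the raw data came from.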
The Need for Contextualization: Data Standardization and Enrichment
Just as humans require and apply knowledge to make sense of data, any AI or analytical process can be enhanced with knowledge applied to raw data, enriching it and making it more useful. This enrichment takes many forms: identifying and correcting inaccurate and incomplete codes; applying code mapping and abstraction; mapping medications to their active ingredients; and interpreting lab ranges. These processes ensure not just the clinical quality of the data, but also its clinical value. For instance, Lumiata would correct a 2007 CPT code in Emily’s data to its corresponding 2017 CPT code, both for code consistency and to preserve the underlying signal. Lumiata also maps all ICD-9 codes to ICD-10 appropriately and applies SNOMED hierarchies to provide additional data abstraction. These kinds of processes enrich raw data for every downstream use.
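Code mapping of the kind described can be sketched as a simple lookup. The two ICD-9 to ICD-10 entries below are real one-to-one mappings, but this is only an illustration: a production mapper would load the full CMS General Equivalence Mappings and handle the many-to-one and one-to-many cases that make this problem genuinely hard.

```python
# Toy forward mapping from ICD-9 to ICD-10 for normalization.
# Only two real one-to-one entries are included for illustration.
ICD9_TO_ICD10 = {
    "250.00": "E11.9",  # type 2 diabetes mellitus without complications
    "401.9": "I10",     # essential (primary) hypertension
}

def normalize_code(code, system):
    """Map an ICD-9 code forward to ICD-10; pass ICD-10 codes through.

    Returns None when no mapping is loaded for the given ICD-9 code.
    """
    if system == "ICD-9":
        return ICD9_TO_ICD10.get(code)
    return code

print(normalize_code("250.00", "ICD-9"))  # E11.9
```

Once every diagnosis is expressed in a single vocabulary, hierarchy-based abstraction (for example, grouping codes under a shared SNOMED ancestor) becomes a straightforward second lookup rather than a per-source special case.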
This rigorous enrichment quickly transforms Emily’s raw, incomplete data into a format that can be meaningfully used by any analytical or machine learning method. We have found that even if the data set consists primarily of just claims data, Lumiata’s AI for Data Prep is able to enrich the data with medical knowledge (from more than 50 million PubMed articles) and experience from other data sets (from more than 60 million patient records) to enable more precise predictions that are interpretable.
Here’s how Emily’s day-to-day is transformed by just the first part of our DaaS process: she can start her analysis within hours, not months, allowing her to focus on strategic business considerations rather than wrestling with data; she is working with current, high-quality data, which gets her past the ‘garbage-in-garbage-out’ pain point; and because her data is standardized, all her processes are repeatable, and she can drive new insights and additional value from the enriched data.
We want to empower our customers to deliver clinical brilliance effortlessly through their data. It starts with an AI-powered data preparation process that is faster, better and more efficient. We have found that our process drastically reduces data latency and shortens health plans’ time to intervention.
By: Ash Damle, Founder and CEO & Prerna Anand, Product Manager
This morning, one of my largest health technology projects experienced several disasters.
It all started with one piece of key information being left out of an initial patient record data set. This omission made several statistical models wildly inaccurate. On top of this, the quality assurance team used the wrong validation criteria, and our client-side counterparts ignored several automatic warnings generated by our error-checking algorithms. Frustratingly, these events forced a timeline extension for the project, which originally needed to be delivered by the end of the month, placing this project and several others in danger of breaking the high-profile promises of executive champions. We immediately undertook emergency measures.
If my team were Mission Control, we had just watched our rocket explode during the launch sequence.
Fortunately, all of these failures were intentionally planned…on a whiteboard.
All of these tragic events took place during a stress test session of our technology implementation within hypothetical client operations. They were part of a series of internal exercises with a large team of stakeholders to identify risks, anticipate failure, and plan for unanticipated events.
In last week’s post, I focused on the importance of identifying success metrics during pilots and how to recognize success when it occurs. It’s equally important to focus on the opposite of success.
I’ve helped shepherd dozens of health technology implementations in enterprise settings. These stress test and failure planning sessions are an essential, but overlooked, piece of the strategy planning process. While they may take up valuable meeting time, the benefits of these disciplined risk mitigation efforts far outweigh the nominal time they take to complete.
Every operational plan for new health technology deployment should have a stress testing phase, where every stakeholder aggressively challenges the innovation’s viability. Oftentimes, the difference between a smooth execution and total catastrophe rests in our ability to anticipate and prepare for when things go wrong.
When people discuss an innovation, they focus on the incredible benefits and wonderful opportunities that a new process or technology brings. What is left out of the discussion is mention of the inherent risk associated with transition and implementation. Stress testing should systematically address all of the areas of risk associated with the technology. Everything that can go wrong should be envisioned and discussed in an outline of anticipated failures. Afterwards, rigorous contingency and emergency plans should be put into place.
Conducting Stress Testing and Failure Planning
Essentially, project planners should plan to fail by closely examining every factor that contributes to success.
For predictive analytics, this means creating an inventory of everything that can go wrong, along with the downstream events each problem or failure would trigger and the resulting consequences. The inventory should span every stage of the pipeline: data collection, ETL, model training, analysis, and reporting. For example, if the initial population in a dataset were to change significantly, what implications would this have for the overall analysis and reporting? How would the organization respond from a crisis management standpoint?
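One lightweight way to keep such an inventory is as structured data that the team reviews and extends over time. The stages, failure modes, and contingencies below are illustrative examples (echoing the whiteboard scenario above), not an exhaustive checklist.

```python
# Illustrative failure-mode inventory for a predictive analytics
# pipeline: each entry names a stage, a failure, its downstream
# consequences, and the agreed contingency.
FAILURE_INVENTORY = [
    {"stage": "data collection",
     "failure": "key field omitted from the source extract",
     "downstream": ["model features silently shift",
                    "predictions become wildly inaccurate"],
     "contingency": "schema checks on every extract; halt on missing fields"},
    {"stage": "ETL",
     "failure": "population distribution changes between refreshes",
     "downstream": ["trained model no longer matches live data"],
     "contingency": "drift monitoring with a scheduled retraining trigger"},
    {"stage": "reporting",
     "failure": "client ignores automated error-checking warnings",
     "downstream": ["stale or invalid results reach stakeholders"],
     "contingency": "escalation path with named owners on both sides"},
]

for item in FAILURE_INVENTORY:
    print(f"{item['stage']}: {item['failure']} -> {item['contingency']}")
```

Keeping the inventory in a machine-readable form also makes it easy to turn each contingency into an automated check or alert later, so the stress-test session produces artifacts rather than just meeting notes.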
We’ve challenged our underlying assumptions and conducted sensitivity testing for every potential point of project failure. This effort ranges from re-evaluating the basic innovation premise to creating what-if scenarios. Much of this work may be reused for contingency planning, but its primary purpose is to make sure that the initial launch occurs smoothly.
From my team’s perspective, our role as Mission Control is to continually check and re-check the launch. It’s prudent to make sure failure planning takes place, so that none of the hypothetical scenarios ever becomes a reality.
Some basic questions that your Mission Control should ask include:
New technology adoption, especially in healthcare, is fraught with risk in both planning and execution. Non-healthcare industries do not have to contend with the number of stakeholders and multi-dimensional pressures that exist within hospital systems and payers. The implementation of new technologies will inevitably introduce a level of uncertainty that organizations typically do not experience on a regular basis. Stress testing and failure planning are essential tools for mitigating innovation execution risk.
By: Wil Yu, VP of Business Development
Wil Yu heads innovation strategy, business, and partnership development for Lumiata. His work focuses on predictive data analytics, care management transformation, and related emerging health technologies. Previously at the U.S. Department of Health and Human Services, he led nationwide healthcare innovation efforts.
The use of deep learning, machine learning and other artificial intelligence approaches in healthcare is rapidly increasing. Over the next two days, Health Data Management highlights companies bringing a variety of approaches to the use of AI in the industry. Today’s list includes 25 companies; yesterday, we featured 15 vendors with artificial intelligence offerings in healthcare...
(Click HERE to read the full article by Health Data Management.)
I spoke with a good friend this week about how things were going with product testing of a new care management platform. Like many early-stage efforts, the technology has been prototyped and is being actively tested in live settings for further evaluation and refinement. She works at a large technology prime contractor that typically refers to such projects as being in the ‘dog food’ stage, but this story would be the same at a startup or early-stage company.
Health IT innovations inevitably go through multiple rounds of focused testing in order to clearly demonstrate the value to a potential customer. Altogether, my friend’s ‘proof-of-concept’ was well organized and was on-track to be completed on-time. There was, however, more than a bit of anxiety when she mentioned the unveiling of the forthcoming closeout results. The final client presentation at the end of the study would generate many useful metrics, but there was ambiguity around exactly how they would be interpreted or received by the client.
In this case, my friend could have greatly benefited from a proof-of-concept strategy like the one I weave into all my early-stage engagements. In each deployment, a set of success metrics needs to be jointly developed and agreed upon at the start of the project.
Innovation success isn’t guaranteed, but everyone should know upfront what it might look like when they see it.
I’m fairly familiar with these types of proof-of-concept engagements, as they are necessary to show value to all parties. In an ideal case, the pilot leads to a commercial engagement. Pilots slow down commercialization, but I’ve yet to find one that didn’t yield useful insights given the time spent, if success is clearly defined in the early stages.
On identifying success, here’s what I recommend for all proof-of-concept studies and pilots:
Proof-of-concept studies are expensive but sometimes necessary engagements, especially for early-stage health technology products. They are crucial for demonstrating an innovation’s value and informing its refinement.
The notion of buying promising early-stage health IT may be a source of discomfort for health plans and healthcare providers, but there are ways to mitigate risk and maximize benefit from cutting-edge innovation.
My advice to any organization embarking on one of these engagements is to build a process for jointly developing, agreeing upon, and evaluating success metrics. This should be done in an open and transparent fashion with the client, and should begin early in the process.
By: Wil Yu, VP of Business Development
We talk about artificial intelligence (AI), robots, and machine learning as if they’re coming soon, or are just some tech pipe dream. They’re not. They’re here today.
In fact, a special report from Bank of America Merrill Lynch predicts the global market for AI and robots will reach just under $153 billion by 2020, and that some industries will experience up to a 30% productivity increase through the use of those technologies alone.
That’s not a century from now; it’s not even a decade. It’s just three short years away. That can either terrify you if you’ve seen too many sci-fi films, or excite you if you consider the upside and benefits it could yield.
The reality probably lies somewhere in the middle: positives and negatives, good and bad. There will be disruption; there will be jobs, and perhaps even whole industries, that see massive displacement from robots and other “intelligent” machines.
And that says nothing of the inherent risk associated with creating something capable of logical thinking without emotion. The robots may not rise up and exterminate humanity any time soon, but the development of true AI is closer than you think. We already have it to varying degrees. And while computer scientists haven’t yet created a truly sentient artificial being, their work in the field is already having tremendous impacts on several industries.
Click HERE to read the full article by Gary Read.
This article was originally published in the Journal of mHealth magazine in the February / March edition on Artificial Intelligence in Healthcare.
As the future of the Affordable Care Act hangs in the balance, insurance companies are bracing for what comes next. A recent paper issued by the Urban Institute explores the implications of various repeal scenarios for insurers, ranging from market withdrawals and destabilization to increased premiums. In every scenario, one thing remains not only consistent but paramount: adjusting to the changes without losing ground requires that insurers transform their business processes to remain efficient and competitive as they transition to the new order.
“At Lumiata, we believe that insurance companies, by the very nature of their operations and influence, are poised to lead the way in adoption of AI, and our conversations with payer executives reinforce this belief.”
With the explosion of data and significant advancements in machine learning, the best means to that transformation is artificial intelligence (AI). But AI can be an ambiguous concept that conjures confusion, awe, anxiety and excitement. In an industry that has been a relatively late adopter of AI, where does one begin?
It starts with understanding current, demonstrable applications. AI is not magic. With the breadth of labor-intensive administrative tasks in healthcare, there are specific AI applications with the potential to impact revenue, cost and business decision-making. It also requires a shift in mindset that actively seeks a new, innovative way of operating: from business-as-usual to business-for-the-future; from slow-moving transactions to fast, efficient, automated interactions; from blunt analytics to precise insights; from one-size-fits-all outreach to highly personalized, nimble engagement and care.
At Lumiata, we believe that insurance companies, by the very nature of their operations and influence, are poised to lead the way in adoption of AI, and our conversations with payer executives reinforce this belief. Insurance companies are one of the primary fulcrums in healthcare delivery and access. Their business processes and decision-making have direct implications on the entire value chain; applying AI to these two specific areas could swing the pendulum toward better, more affordable healthcare.
Insurance companies have also been dealing with large data sets for years. Their back-offices, the main sites of data aggregation, are fertile ground for AI to make an impact. They are overloaded with costly and labor-intensive tasks: routine, repetitive processes that can be automated yet require some level of clinical insight. Additionally, insurance companies have been operating under mounting regulatory and policy pressures throughout the life of the Affordable Care Act, forcing them to seek innovative technologies that put their data to work and transform those back-offices into engines of value.
“There are a number of business processes in those back-offices where AI could be transformative: customer service, care coordination, prior authorization, predictive analytics, underwriting, to name a few.”
During that same period, there has also been impressive progress in computer science, particularly in machine learning, computer vision, natural language processing, speech recognition and robotics, technologies that could be game-changers for business processes in health plans. With these forcing functions already in place, the payer back-office, that dark place laden with heavy costs, is perhaps the greatest point of opportunity and impact for AI. There are a number of business processes in those back-offices where AI could be transformative: customer service, care coordination, prior authorization, predictive analytics, underwriting, to name a few.
Pilot projects by researchers and a few health plans point to high-value, low cost experiments that can result in demonstrable value. For example, prior authorization – a costly and time-consuming process that can cost a large health plan up to $90 million annually – can be automated using machine learning, which provides recommendations within seconds as opposed to three to five days when performed manually. Virtual health apps that use text-to-speech and search technologies are being deployed to improve customer service interactions with members and patients. AI can also help executives improve their business decision-making and strategy through continuous (rather than episodic) risk stratification that helps them gain competitive advantage by revealing precise utilization trends and reimbursement patterns, and driving more timely and cost-effective care coordination and disease management.
Numerous studies are also showing promising accuracy of AI in improving payer-provider collaboration: because AI can tackle the inevitable heterogeneity of data in healthcare and be more clinically precise, it can help payers determine which providers to reach out to for specific and urgent patient outreach. It can also help providers be more precise with their care choices, coordinating care in a way that reduces waste, and align priorities between payers, providers and care managers.
“AI can also help executives improve their business decision-making and strategy through continuous (rather than episodic) risk stratification.”
As we look to the future, AI can help payers thrive amid considerable uncertainty. We foresee three trends that payers are likely to encounter in the coming year, and a role for AI in helping them pave the path toward better, more affordable healthcare.
These trends are not new, but in the new policy environment, they will demand greater leadership from insurers. And as always, in times of change, it’s either evolve or go extinct. For businesses today, that means embracing new technology and AI or getting left behind with legacy systems. At a time when the future of the industry is once again shrouded in uncertainty, practical AI applications that are ripe for deployment can empower organizations to focus on growing their business, rather than playing catch up and administering it. If payers lead the way in adopting AI, it can help them unlock their potential to be the platforms of tomorrow’s healthcare, in spite of the uncertainty that lies ahead.